Modelling Project Success by Alfi, Jiaying, Himanshu, Sara



Problem Solving Strategy

A typical machine learning project follows a general sequence of stages for building a predictive model. The steps followed in this analysis are:
1.  Understanding the problem domain
2.  Data Exploration and Preparation
3.  Feature Extraction
4.  Dimensionality Reduction (or Feature Selection)
5.  Various Model Evaluation
6.  Hyper-parameter Tuning
7.  Ensembling: Model Selection

Each of these stages will be followed throughout this Predictive Model Design.

STEP 1. Understanding the problem domain

Kickstarter - Crowdfunding and Success

Kickstarter is an American public-benefit corporation based in Brooklyn, New York, that maintains a global crowdfunding platform focused on creativity. The company's stated mission is to "help bring creative projects to life". Kickstarter has reportedly received more than $1.9 billion in pledges from 9.4 million backers to fund 257,000 creative projects, such as films, music, stage shows, comics, journalism, video games, technology and food-related projects. People who back Kickstarter projects are offered tangible rewards or experiences in exchange for their pledges. This model traces its roots to the subscription model of arts patronage, where artists would go directly to their audiences to fund their work.

Project Owner's Perspective:

  1. What is the ideal and optimal range for my project's funding goal?
  2. On which day of the week should I post the project on Kickstarter?
  3. How many keywords should I use in my project title?
  4. What should the total length of my project description be?

Kickstarter's Perspective: A large amount of manual effort is required to screen each project before it is approved to be hosted on the platform. What are the key ingredients for a project to be successful?

Why not build a model to predict whether a project will be successful before it is released?

List of possible predicting factors:

  • Total amount to be raised - A higher goal may decrease the chances that the project will be successful.
  • Total duration of the project - Projects that are active for very short or very long periods may be less likely to succeed.
  • Theme of the project - People may be more willing to donate to a project with a good cause or a good theme.
  • Writing style of the project description - If the message is not clear, the project may not get fully funded.
  • Length of the project description - Very long pieces of text may not perform as well as shorter, crisper texts.
  • Project launch time - A project launched on a weekday, as compared to a weekend or holiday, may not reach its full funding amount.

Given Dataset

Independent:

ID:

  • ID 378661 non-null int64 -- Unique project Id

Text:

  • name 378657 non-null object -- Project name
  • main_category 378661 non-null object -- Main category - Food/Music/Video
  • category 378661 non-null object -- Sub-category / type of industry, e.g. Restaurant, Food, Poetry, Product Design

Date:

  • deadline 378661 non-null object -- Crowdfunding deadline
  • launched 378661 non-null object -- Date launched

Categorical: Nominal

  • currency 378661 non-null object -- Type of Currency
  • country 378661 non-null object -- Country

Numerical:

  • goal 378661 non-null float64 -- Goal - the amount of money the creator needs to complete the project
  • pledged 378661 non-null float64 -- Amount pledged by the crowd
  • backers 378661 non-null int64 -- Number of supporters
  • usd pledged 374864 non-null float64 -- Pledged amount in USD (conversion made by KS)
  • usd_pledged_real 378661 non-null float64 -- Pledged amount in USD (conversion made by fixer.io api)
  • usd_goal_real 378661 non-null float64 -- Goal amount in USD
Dependent - Nominal:
  • state 378661 non-null object -- Project status - successful, failed, canceled, undefined, etc.

STEP 2. Data Exploration and Preparation

Total Projects:  378661 
Total Features:  15
Out[6]:
ID name category main_category currency deadline goal launched pledged state backers country usd pledged usd_pledged_real usd_goal_real
0 1000002330 The Songs of Adelaide & Abullah Poetry Publishing GBP 2015-10-09 1000.0 2015-08-11 12:12:28 0.0 failed 0 GB 0.0 0.0 1533.95
1 1000003930 Greeting From Earth: ZGAC Arts Capsule For ET Narrative Film Film & Video USD 2017-11-01 30000.0 2017-09-02 04:43:57 2421.0 failed 15 US 100.0 2421.0 30000.00
2 1000004038 Where is Hank? Narrative Film Film & Video USD 2013-02-26 45000.0 2013-01-12 00:20:50 220.0 failed 3 US 220.0 220.0 45000.00
3 1000007540 ToshiCapital Rekordz Needs Help to Complete Album Music Music USD 2012-04-16 5000.0 2012-03-17 03:24:11 1.0 failed 1 US 1.0 1.0 5000.00
4 1000011046 Community Film Project: The Art of Neighborhoo... Film & Video Film & Video USD 2015-08-29 19500.0 2015-07-04 08:35:03 1283.0 canceled 14 US 1283.0 1283.0 19500.00

Data Cleaning and Noise Removal

  1. Verify the distinct values of each column
  2. Get rid of unwanted columns (active-stage columns)
  3. Remove duplicates if they exist
  4. Handle missing values; in this case, delete those rows
  5. Get rid of noise above a 2,200,000 goal amount (all such projects failed)
  6. Projects launched in 1970 and 2018 (6 rows) can be removed
  7. Misrepresented values such as "N,0"" in the country column must be addressed; they will be cleaned as part of data cleaning.

Columns which are not useful for analysis are as follows and can be removed: ID, goal, pledged, usd_pledged and currency.

Cancelled State - About 10% of the projects in this dataset are in the cancelled state. Based on business logic this could imply failure: at some point while the campaign was live, the project owner figured it would not work and cancelled the campaign. There are several other possible reasons as well.

For example, the project owner may have obtained funding from somewhere else, or the project requirements changed, leading them to recreate the online crowdfunding campaign.

Since the dataset gives no clear reason why a project was cancelled, nor the date on which it was cancelled, the cancelled state is treated here as a separate state and not as failed.
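The cleaning steps listed above can be sketched with pandas. The tiny inline frame below is illustrative, not the real dataset; only the column names follow the dataset description.

```python
import pandas as pd

# Tiny illustrative frame mimicking the Kickstarter schema (not the real data)
df = pd.DataFrame({
    "ID": [1, 2, 2, 3, 4],
    "name": ["A", "B", "B", None, "D"],
    "country": ["US", 'N,0"', 'N,0"', "US", "CA"],
    "usd_goal_real": [500.0, 1000.0, 1000.0, 2_500_000.0, 800.0],
    "state": ["successful", "failed", "failed", "failed", "canceled"],
})

df = df.drop_duplicates()                   # 3. remove duplicates
df = df.dropna()                            # 4. delete rows with missing values
df = df[df["usd_goal_real"] <= 2_200_000]   # 5. drop noise above the 2.2M goal
df = df[df["country"] != 'N,0"']            # 7. drop misrepresented country codes
df = df.drop(columns=["ID"])                # drop a column not useful for analysis
print(df.shape)
```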

Data Exploration - Scatter Plot for Goal vs Pledged Amount

Observations from the scatter plot:

  1. Most successful projects had goals below USD 1 million.
  2. Most successful projects raised more than their planned goal.
  3. Beyond a goal of approximately USD 2 million, none of the projects were successful.

Additionally, about 13% of the projects have not raised a single penny and are either cancelled or failed.

Note: 1e8 is standard scientific notation, and here it indicates an overall scale factor for the y-axis. That is, if there is a 2 on the y-axis and 1e8 at the top, the value at 2 actually indicates 2 × 1e8 = 2 × 10^8 = 200,000,000.

Distributions - Outliers and Skew

A general guideline for skewness is that if the number is greater than +1 or lower than –1, this is an indication of a substantially skewed distribution. For kurtosis, the general guideline is that if the number is greater than +1, the distribution is too peaked. Likewise, a kurtosis of less than –1 indicates a distribution that is too flat.

Interpreting Skewness: If skewness is less than −1 or greater than +1, the distribution is highly skewed. If skewness is between −1 and −½ or between +½ and +1, the distribution is moderately skewed. If skewness is between −½ and +½, the distribution is approximately symmetric.

state               -0.271761
backers             86.294188
usd_pledged_real    82.063085
usd_goal_real       12.765938
dtype: float64

Numeric variables such as backers, usd_pledged_real and usd_goal_real are highly right-skewed because many failed instances have no backers and no pledged amount. This will be addressed through data normalization while developing the model.
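Skew values like the ones above can be computed with pandas' `Series.skew()`; a minimal sketch on synthetic right-skewed data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic right-skewed "pledged" amounts: many small values, a few huge ones
pledged = pd.Series(rng.exponential(scale=1000.0, size=10_000) ** 2)
print(pledged.skew())  # well above +1, i.e. substantially right-skewed
```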

To explore these variables, the data needs to be transformed, and histograms should then be created to visualize the distributions.

Distributions of Monetary Columns against the Class Variable - State

               state  usd_goal_real_log  usd_pledged_real_log
count  369678.000000      369678.000000         369678.000000
mean        1.257500           8.632460              5.775453
std         0.632728           1.671539              3.309677
min         0.000000           0.009950              0.000000
25%         1.000000           7.601402              3.526361
50%         1.000000           8.612685              6.456770
75%         2.000000           9.662097              8.314587
max         2.000000          14.591996             16.828050
Minimum goal amount is as small as 0.01

The amount values in the dataset are highly right-skewed, and to view their distributions they must be log transformed.

Logarithm: Taking the log of a variable is a common transformation used to change the shape of its distribution on a distribution plot. It is generally used to reduce the right skewness of variables. However, it cannot be applied to zero or negative values.
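A sketch of the transformation on synthetic data; `np.log1p` (log(1 + x)) is used here as it sidesteps the zero problem for projects that raised nothing. The lognormal sample and the 500 zeroed rows are illustrative.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
pledged = pd.Series(rng.lognormal(mean=6.0, sigma=2.0, size=10_000))
pledged.iloc[:500] = 0.0  # failed projects that raised nothing

raw_skew = pledged.skew()
log_skew = np.log1p(pledged).skew()  # log1p(x) = log(1 + x) is defined at zero
print(raw_skew, log_skew)
```

After the transform the skew statistic shrinks dramatically, which is why the histograms in this section are drawn on the log-transformed columns.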

The distributions show:

  • Successful projects had relatively small fundraising goals compared to failed or cancelled projects.
  • Cancelled and failed projects' goal amounts are higher above the median.
  • About 16% of pledged amounts are around 1 USD.

Analysing the categories further:

Plots: successful categories' frequency, failed categories' frequency, and the general goal distribution by category.


(369678, 11)
(369670, 11)
Descriptive status count by year
state             0      1      2
launched_year                    
2009            150    600    579
2010            926   4981   4593
2011           2139  11875  12171
2012           2627  20575  17892
2013           3686  21652  19402
2014           7394  38119  21106
2015           8900  44128  20971
2016           7073  30330  18675
2017           5756  24908  18462

STEP 3. Feature Extraction

  1. Time Data: Launch Year, Launch Month, Launch Day, is_weekend, duration

  2. Categorical Data: Create Dummies for Main Category and Country

Categorical Levels: main_category (15 levels) and category (159 levels) are different levels of categorization. They should either be combined, or the more significant of the two columns should be used, whichever adds more value towards predicting the class variable. Similarly, country and currency must be addressed. Once finalized, dummy encoding of the chosen columns will be used for training and testing the machine learning model.
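Dummy encoding of the nominal columns can be sketched with `pd.get_dummies`; the values below are illustrative, not rows from the dataset.

```python
import pandas as pd

df = pd.DataFrame({
    "main_category": ["Music", "Food", "Music", "Games"],
    "country": ["US", "GB", "US", "CA"],
})

# One-hot (dummy) encode the nominal columns for model training
encoded = pd.get_dummies(df, columns=["main_category", "country"])
print(list(encoded.columns))
```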

Backers - Number of people supporting the project.

  1. Numerical Data: Generate the number of projects and the mean goal amount for each main category and sub-category, and the difference between the mean main-category and mean sub-category goal amounts.

Numerical Data: goal is the total fund needed to execute the project, and pledged is the amount raised so far. usd_pledged_real and usd_goal_real are USD conversions from the different currencies using an online conversion API.

  1. Text Feature Extraction: name is the project name, and different text features can be extracted from it using feature extraction techniques.

Identify values from the project name column: extract the length, percentage of punctuation, syllable count, character count, number of words, stopword count, capitalized-word count and number of numeric values, then clean the data for plotting a word cloud.
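A minimal sketch of the name-based features, using a tiny stand-in stopword list instead of a full NLTK one (syllable counting is omitted here):

```python
import string
import pandas as pd

STOPWORDS = {"the", "of", "a", "an", "and", "for", "to", "is"}  # tiny stand-in list

def text_features(name):
    """Extract simple features from a project name."""
    words = name.split()
    return {
        "name_len": len(name),
        "num_words": len(words),
        "num_chars": sum(len(w) for w in words),
        "punct%": 100 * sum(c in string.punctuation for c in name) / max(len(name), 1),
        "num_stopwords": sum(w.lower() in STOPWORDS for w in words),
        "num_capitalized": sum(w[:1].isupper() for w in words),
        "num_numeric": sum(w.isdigit() for w in words),
    }

names = ["Where is Hank?", "The Songs of Adelaide & Abullah"]
feats = pd.DataFrame([text_features(n) for n in names])
print(feats)
```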

Time: launched and deadline can be used to identify and extract time-related features.

-- Text features such as bag-of-words and TF-IDF could be used, but were not used here.

Out[48]:
Art Comics Crafts Dance Design Fashion Film & Video Food Games Journalism ... launched_week launched_day is_weekend duration mean_category_goal category_count mean_main_category_goal main_category_count diff_mean_category_goal diff_pledged_goal_real
0 0 0 0 0 0 0 0 0 0 0 ... 33 1 0 58 5213.996468 16308 11359.38943 2683423 9825.43943 -1533.95
1 0 0 0 0 0 0 0 0 0 0 ... 26 4 0 29 5213.996468 16308 11359.38943 2683423 5298.41943 -6030.67
2 0 0 0 0 0 0 0 0 0 0 ... 10 4 0 29 5213.996468 16308 11359.38943 2683423 9359.38943 -1675.00
3 0 0 0 0 0 0 0 0 0 0 ... 18 3 0 29 5213.996468 16308 11359.38943 2683423 1359.38943 -9899.00
4 0 0 0 0 0 0 0 0 0 0 ... 40 3 0 29 5213.996468 16308 11359.38943 2683423 10601.86943 -715.86

5 rows × 66 columns

STEP 4. Dimensionality Reduction (or Feature Selection)

Correlation Check: 1 is a perfect positive correlation, 0 is no correlation (the values don't seem linked at all), -1 is a perfect negative correlation

We will only select features which have an absolute correlation above 0.5 with the output variable. As a general guideline, we should keep those variables which show a decent or high correlation with the target variable.

['main_category', 'state', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%', 'syllable_count', 'num_words', 'avg_word', 'launched_week', 'launched_day', 'mean_category_goal', 'category_count', 'mean_main_category_goal', 'main_category_count', 'diff_mean_category_goal']

Correlation Check Results: Most values vary but are only weakly related individually; they may collectively contribute towards predictions.
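The correlation filter described above can be sketched as follows; the synthetic columns backers and launched_day are illustrative stand-ins for a related and an unrelated feature.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
n = 1000
state = rng.integers(0, 2, size=n).astype(float)
df = pd.DataFrame({
    "state": state,
    "backers": state * 50 + rng.normal(0, 10, n),  # strongly tied to the outcome
    "launched_day": rng.integers(0, 7, n),         # unrelated noise
})

# Keep features whose absolute correlation with the target exceeds 0.5
corr = df.corr()["state"].drop("state").abs()
selected = corr[corr > 0.5].index.tolist()
print(selected)
```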

Highly correlated feature pairs, of which one should be removed based on its correlation with the state column:

  • mean_main_category_goal vs mean_category_goal --> neither is related to state, and both can be removed
  • num_words vs syllable_count --> neither is related to state, and both can be removed

Methods:

1. Backward Elimination: As the name suggests, we first feed all the possible features to the model. We check the performance of the model and then iteratively remove the worst-performing features one by one until the overall performance of the model falls within an acceptable range. The performance metric used here to evaluate feature performance is the p-value: if the p-value is above 0.05 we remove the feature, else we keep it.
2. Recursive Feature Elimination: The RFE method takes the model to be used and the number of required features as input. It then gives the ranking of all the variables, with 1 being most important.

3. Embedded Methods - Regularization - penalizes features based on their importance.
['Art', 'Comics', 'Crafts', 'Dance', 'Design', 'Fashion', 'Film & Video', 'Food', 'Games', 'Journalism', 'Music', 'Photography', 'Publishing', 'Technology', 'Theater', 'AT', 'AU', 'BE', 'CA', 'CH', 'DE', 'DK', 'ES', 'FR', 'GB', 'HK', 'IE', 'IT', 'JP', 'LU', 'MX', 'NL', 'NO', 'NZ', 'SE', 'SG', 'US', 'main_category', 'backers', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%', 'syllable_count', 'num_words', 'num_chars', 'avg_word', 'num_stopwords', 'num_capitalized', 'launched_year', 'launched_week', 'launched_day', 'duration', 'mean_category_goal', 'mean_main_category_goal', 'main_category_count', 'diff_mean_category_goal']

The p-value method does not reduce the feature set much and may not be useful.

Recursive Feature Elimination:

The Recursive Feature Elimination (RFE) method works by recursively removing attributes and building a model on those attributes that remain.

Ref# https://towardsdatascience.com/feature-selection-with-pandas-e3690ad8504b
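A minimal RFE sketch with scikit-learn on synthetic data; the sample and feature counts here are illustrative, not the notebook's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

X, y = make_classification(n_samples=300, n_features=10, n_informative=4,
                           random_state=0)

# Recursively drop the weakest features until the requested number remain
rfe = RFE(estimator=RandomForestClassifier(n_estimators=50, random_state=0),
          n_features_to_select=4)
rfe.fit(X, y)
print(rfe.ranking_)  # rank 1 marks a selected feature
```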

Before Balancing Shape X: (10000, 62) y:  (10000,)
After Balancing Shape X: (18242, 62) y:  (18242,)
Normalize
0 :  1
1 :  2
2 :  3
3 :  4
4 :  5
5 :  6
6 :  7
7 :  8
8 :  9
11 :  12
Optimum number of features: 12
Score with 12 features: 0.926990
Selected Features:  Index(['backers', 'usd_pledged_real', 'usd_goal_real', 'name_len', 'punct%',
       'syllable_count', 'num_chars', 'avg_word', 'launched_year',
       'launched_week', 'duration', 'diff_mean_category_goal'],
      dtype='object')

Based on recursive elimination using a RandomForest classifier, this gives the optimal set of features that can be used for training and testing the prediction model.

Embedded Method: Regularization methods are the most commonly used embedded methods; they penalize a feature given a coefficient threshold.
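A sketch of the embedded approach using an L1-penalized LogisticRegression inside SelectFromModel; the data is synthetic and the estimator and C value are assumptions, not the notebook's exact setup.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=12, n_informative=3,
                           random_state=0)

# The L1 penalty drives weak coefficients to zero; SelectFromModel keeps
# only the features whose coefficients survive.
l1 = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
selector = SelectFromModel(l1).fit(X, y)
print(selector.get_support().sum(), "of", X.shape[1], "features kept")
```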

STEP 5. Various Model Evaluation

Modelling Classification:

  • Rebalance the classes using the ADASYN over-sampling technique.
  • Save the balanced set of selected feature values for later use, so that re-executing all of the above steps is not necessary.
Before Balancing Shape X: (369678, 12) y:  (369678,)
After Balancing Shape X: (584054, 12) y:  (584054,)
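ADASYN (from imbalanced-learn) is what was used above; as a dependency-free stand-in, plain random over-sampling illustrates the balancing idea, duplicating minority rows rather than synthesizing new ones as ADASYN does.

```python
import numpy as np

rng = np.random.default_rng(3)

def random_oversample(X, y):
    """Duplicate minority-class rows until every class matches the majority count."""
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()
    keep = [np.arange(len(y))]
    for cls, cnt in zip(classes, counts):
        if cnt < target:
            cls_idx = np.flatnonzero(y == cls)
            keep.append(rng.choice(cls_idx, size=target - cnt, replace=True))
    keep = np.concatenate(keep)
    return X[keep], y[keep]

X = rng.normal(size=(100, 3))
y = np.array([0] * 80 + [1] * 20)
X_bal, y_bal = random_oversample(X, y)
print(np.bincount(y_bal))  # [80 80]
```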

Load the Saved Features and then Normalize Them

(584054, 12)
(584054,)

Execute Various Classifier Algorithms and Note Accuracy

  1. Model with Default Parameters
  2. Tuned Model

LogisticRegression

Running Model
Training Set Accuracy:
0.7819956639264793
Test Set Accuracy:
 0.7799693522014194

KNeighborsClassifier

Running Model
Training Set Accuracy:
0.8043592787451795
Test Set Accuracy:
0.6992877687987947

DecisionTreeClassifier

Running Model
Training Set Accuracy:
0.9999978597860214
Test Set Accuracy:
 0.9032368526936676
Running Model
Training Set Accuracy:
0.9519008310450879
Test Set Accuracy:
 0.9077398532672437
0.09226014673275629

GaussianNB

Training Set Accuracy:
[[ 12615 169359   4345]
 [  6631 183514   2118]
 [  4209 129357  57304]]
0.4450471681546469
Test Set Accuracy:
[[ 293 4442  127]
 [ 172 4679   54]
 [ 111 3258 1466]]
0.44089850705382827

RandomForestClassifier

Running Model
Training Set Accuracy:
[[185794    450     75]
 [  1052 191168     43]
 [     6      2 190862]]
0.9971411111033064
Test Set Accuracy:
[[4376  427   59]
 [ 566 4296   43]
 [   4    4 4827]]
0.9244624024106287
Running Model
Training Set Accuracy:
0.9999964878514782
Test Set Accuracy:
 0.9395288316668949
Average number of nodes 111078
Average maximum depth 54
Out[37]:
RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False)

ExtraTreeClassifier

Running Model
Training Set Accuracy:
0.9999964878514782
Test Set Accuracy:
0.9024106286810026
Running Model
Training Set Accuracy:
0.9999964878514782
Test Set Accuracy:
 0.9289823311875086

GradientBoostingClassifier

Running Model
Training Set Accuracy:
[[140175  29581  16563]
 [ 38452 136694  17117]
 [  2427   5044 183399]]
0.8082647879013508
Test Set Accuracy:
[[3592  825  445]
 [1007 3457  441]
 [  53  135 4647]]
0.800986166278592

AdaBoostClassifier

Running Model
Training Set Accuracy:
[[ 71727 108799   5793]
 [ 21617 165843   4803]
 [  5635   1285 183950]]
0.7402204224412242
Test Set Accuracy:
[[1826 2882  154]
 [ 570 4208  127]
 [ 150   26 4659]]
0.7322969456238871
Running Model
Training Set Accuracy:
[[186319      0      0]
 [     2 192261      0]
 [     0      0 190870]]
0.9999964878514782
Test Set Accuracy:
[[4383  433   46]
 [ 406 4470   29]
 [   1    1 4833]]
0.937268867278455

XGboost

Running Model
Training Set Accuracy:
[[131979  34884  19456]
 [ 34849 139105  18309]
 [  2379   6086 182405]]
0.7963603604869243
Test Set Accuracy:
[[3371  974  517]
 [ 915 3512  478]
 [  52  161 4622]]
0.7879057663333789
Running Model
Training Set Accuracy:
[[185952    233    134]
 [    30 192175     58]
 [     0      0 190870]]
0.9992009862113049
Test Set Accuracy:
[[4290  521   51]
 [ 268 4621   16]
 [   0    0 4835]]
0.9413778934392549

BaggingClassifier

Running Model
Training Set Accuracy:
[[185638    585     96]
 [  1139 191101     23]
 [    15     10 190845]]
0.9967196532806979
Test Set Accuracy:
[[4337  475   50]
 [ 567 4323   15]
 [   9    5 4821]]
0.9232296945623887
Running Model
Training Set Accuracy:
0.9638213580775904
Test Set Accuracy:
 0.9263114641829886

Lightgbm

Running Model
Training Set Accuracy:
0.9302241453186573
Test Set Accuracy:
0.925421175181482
Running Model
Training Set Accuracy:
[[160600  22532   3187]
 [  6593 185160    510]
 [    13      0 190857]]
0.942339301644388
Test Set Accuracy:
[[4162  622   78]
 [ 195 4688   22]
 [   1    0 4834]]
0.9371318997397616

STEP 6. Hyper-Parameter Tuning using RandomizedSearchCV

  • Grid Search looks at all possible combinations of the values specified for the hyperparameters and gives the best combination.
  • RandomizedSearchCV instead samples a fixed number of combinations from the specified parameter distributions, trading exhaustiveness for speed.
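A minimal RandomizedSearchCV sketch on synthetic data; the parameter ranges and n_iter are illustrative, not the grids used below.

```python
from scipy.stats import randint
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, random_state=0)

# Try 5 random (criterion, max_depth) combinations instead of the whole grid
search = RandomizedSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_distributions={"criterion": ["gini", "entropy"],
                         "max_depth": randint(5, 40)},
    n_iter=5, cv=3, random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```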

DecisionTreeClassifier

Parameters currently in use:

{'class_weight': None,
 'criterion': 'entropy',
 'max_depth': None,
 'max_features': None,
 'max_leaf_nodes': None,
 'min_impurity_decrease': 0.0,
 'min_impurity_split': None,
 'min_samples_leaf': 1,
 'min_samples_split': 2,
 'min_weight_fraction_leaf': 0.0,
 'presort': False,
 'random_state': None,
 'splitter': 'best'}
Parameters Grid:

{'criterion': ['entropy'], 'max_depth': [20, 23, 26, 29, 32, 35, 38]}
Fitting 3 folds for each of 7 candidates, totalling 21 fits
[Parallel(n_jobs=-1)]: Using backend LokyBackend with 8 concurrent workers.
[Parallel(n_jobs=-1)]: Done  17 out of  21 | elapsed:   43.6s remaining:   10.2s
[Parallel(n_jobs=-1)]: Done  21 out of  21 | elapsed:   44.1s finished
Best:

Score:  0.9050859422743269
Estimator:  DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=None,
            splitter='best')

RandomForestClassifier

  • Random Forest

https://www.analyticsvidhya.com/blog/2015/06/tuning-random-forest-model/

  1. Parameters that improve the predictive power of the model:

    • max_features
    • n_estimators
    • min_samples_leaf
  2. Parameters that make model training easier:

    • n_jobs : -1 uses all CPUs
    • random_state : fixes the random seed for reproducibility
    • oob_score : uses out-of-bag samples to estimate generalization accuracy
  3. XGBoost

  4. Light GBM
Parameters currently in use:

{'bootstrap': True,
 'class_weight': None,
 'criterion': 'gini',
 'max_depth': None,
 'max_features': 'auto',
 'max_leaf_nodes': None,
 'min_impurity_decrease': 0.0,
 'min_impurity_split': None,
 'min_samples_leaf': 1,
 'min_samples_split': 2,
 'min_weight_fraction_leaf': 0.0,
 'n_estimators': 'warn',
 'n_jobs': None,
 'oob_score': False,
 'random_state': None,
 'verbose': 0,
 'warm_start': False}
Parameters Grid:

{'bootstrap': [False],
 'criterion': ['entropy'],
 'max_depth': [10, 35, 60, 85, 110, None],
 'n_estimators': [100, 325, 550, 775, 1000]}
Fitting 3 folds for each of 30 candidates, totalling 90 fits
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done  35 tasks      | elapsed: 215.3min
[Parallel(n_jobs=3)]: Done  90 out of  90 | elapsed: 688.2min finished
Best:

Score:  0.938230087873956
Estimator:  RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=85, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=775, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False)
Out[24]:
mean_fit_time std_fit_time mean_score_time std_score_time param_n_estimators param_max_depth param_criterion param_bootstrap params split0_test_score split1_test_score split2_test_score mean_test_score std_test_score rank_test_score split0_train_score split1_train_score split2_train_score mean_train_score std_train_score
18 1896.376352 20.553711 77.405066 13.674251 775 85 entropy False {'n_estimators': 775, 'max_depth': 85, 'criter... 0.937809 0.938251 0.938630 0.938230 0.000335 1 1.0 0.999997 0.999997 0.999998 0.000001
23 1946.687666 31.408641 53.318974 2.765041 775 110 entropy False {'n_estimators': 775, 'max_depth': 110, 'crite... 0.937793 0.938019 0.938677 0.938163 0.000375 2 1.0 0.999997 0.999997 0.999998 0.000001
19 2461.662404 26.109106 107.406663 10.958111 1000 85 entropy False {'n_estimators': 1000, 'max_depth': 85, 'crite... 0.938004 0.937998 0.938419 0.938141 0.000197 3 1.0 0.999997 0.999997 0.999998 0.000001
29 2448.812481 24.713323 114.961004 9.517741 1000 None entropy False {'n_estimators': 1000, 'max_depth': None, 'cri... 0.937767 0.938098 0.938509 0.938125 0.000303 4 1.0 0.999997 0.999997 0.999998 0.000001
14 2439.444675 30.269853 123.600921 5.194461 1000 60 entropy False {'n_estimators': 1000, 'max_depth': 60, 'crite... 0.937677 0.938025 0.938604 0.938102 0.000382 5 1.0 0.999997 0.999997 0.999998 0.000001

BaggingClassifier

xgboost

LightGBM

STEP 7. KFold and Ensembling: Model Selection

Further Dimensionality Reduction:

1. PCA
2. LDA
3. KPCA

Since the labels are known, LDA was tried with different numbers of components. The resulting model underperforms, so none of LDA, PCA or KPCA will be used for dimensionality reduction.

Running Model
Training Set Accuracy:
[[186319      0      0]
 [     2 192261      0]
 [     0      0 190870]]
0.9999964878514782
Test Set Accuracy:
[[2918  994  950]
 [1247 2049 1609]
 [1011 1643 2181]]
0.48952198328996027

KFold -

A model can either suffer from underfitting (high bias) if the model is too simple, or it can overfit the training data (high variance) if the model is too complex for the underlying training data.
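K-fold evaluation as run below can be sketched with `cross_val_score`; the data and tree depth here are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=0)

# 10-fold CV: the mean estimates generalisation accuracy, the std its stability
scores = cross_val_score(DecisionTreeClassifier(max_depth=20, random_state=0),
                         X, y, cv=10)
print(round(scores.mean(), 3), round(scores.std(), 4))
```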

TunedDecTree  KFold Evaluation:
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done  10 out of  10 | elapsed:   53.5s finished
TunedDecTree  Accuracies: [0.9078074  0.90930004 0.90708742 0.91035209 0.9083326  0.90778822
 0.9053297  0.90796382 0.9078409  0.90894563]
TunedDecTree  Accuracy: 0.9080747822473958
TunedDecTree  Accuracy: 0.0012744890869089748
TunedRForest  KFold Evaluation:
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done  10 out of  10 | elapsed: 13.1min finished
TunedRForest  Accuracies: [0.939188   0.94124258 0.94073333 0.94150496 0.9392923  0.94045131
 0.93913425 0.94083765 0.94046887 0.9403449 ]
TunedRForest  Accuracy: 0.940319815929389
TunedRForest  Accuracy: 0.0008048414543944626
TunedBaggingC  KFold Evaluation:
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done  10 out of  10 | elapsed:  4.9min finished
TunedBaggingC  Accuracies: [0.9258069  0.92770344 0.92668493 0.92884362 0.92649047 0.9275968
 0.92671876 0.92657828 0.92640267 0.92752529]
TunedBaggingC  Accuracy: 0.9270351169198616
TunedBaggingC  Accuracy: 0.0008312427716826074
Tunedmodelxgb  KFold Evaluation:
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
[Parallel(n_jobs=3)]: Done  10 out of  10 | elapsed: 94.7min finished
Tunedmodelxgb  Accuracies: [0.94062796 0.94277034 0.94196256 0.94345421 0.94252349 0.9422952
 0.94122399 0.94185618 0.94155764 0.94375176]
Tunedmodelxgb  Accuracy: 0.9422023327612395
Tunedmodelxgb  Accuracy: 0.0009182073197103619
simplelgbm_model  KFold Evaluation:
[Parallel(n_jobs=3)]: Using backend LokyBackend with 3 concurrent workers.
simplelgbm_model  Accuracies: [0.92880975 0.92912584 0.92937169 0.93123189 0.92852753 0.92912459
 0.92845728 0.93047678 0.92824655 0.93224923]
simplelgbm_model  Accuracy: 0.9295621124628214
simplelgbm_model  Accuracy: 0.0012590128577712595
[Parallel(n_jobs=3)]: Done  10 out of  10 | elapsed:  1.6min finished

Ensemble

https://medium.com/@rrfd/boosting-bagging-and-stacking-ensemble-methods-with-sklearn-and-mlens-a455c0c982de

Combine the decisions from multiple models to improve the overall performance.

1. Max Voting
2. Averaging
3. Weighted Averaging

The main principle behind ensemble modelling is to group weak learners together to form one strong learner.

  • Bagging to decrease the model's variance (RandomForest)
  • Boosting to decrease the model's bias (XGBoost)
  • Stacking to increase the predictive force of the classifier (combinations of models)

Creating a Pipeline

A scikit-learn Pipeline acts as a wrapper around the individual transformers and estimators.
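A minimal Pipeline sketch; the scaler and estimator choices here are illustrative, not the notebook's exact stages.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=300, random_state=0)

# fit() runs the scaler then the classifier; predict() reuses the fitted scaler
pipe = Pipeline([("scale", MinMaxScaler()),
                 ("clf", LogisticRegression(max_iter=500))])
pipe.fit(X, y)
print(round(pipe.score(X, y), 3))
```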

Averaging

Loading...
Predicting...
Averaging...
0.940350636899055
Accuracy: 0.8443130542924309

VotingClassifier and Weighing

Running Model
Training Set Accuracy:
0.9997559056777393
Test Set Accuracy:
 0.9406930557457882
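A soft-voting sketch with weights; the base estimators and weights below are illustrative, not the notebook's exact ensemble.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages predicted probabilities; weights favour stronger models
vote = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=500)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(n_estimators=50, random_state=0))],
    voting="soft", weights=[1, 1, 2],
)
vote.fit(X_tr, y_tr)
print(round(vote.score(X_te, y_te), 3))
```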

Bagging

  • Mean of: 0.946, std: (+/-) 0.002 [XGBClassifier]
  • Mean of: 0.943, std: (+/-) 0.003 [Bagging XGBClassifier]
Running DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=None,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best')
Mean of: 0.898, std: (+/-) 0.003 [DecisionTreeClassifier]
Mean of: 0.923, std: (+/-) 0.002 [Bagging DecisionTreeClassifier]

Running RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False)
Mean of: 0.942, std: (+/-) 0.004 [RandomForestClassifier]
Mean of: 0.939, std: (+/-) 0.003 [Bagging RandomForestClassifier]

Running ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False)
Mean of: 0.933, std: (+/-) 0.002 [ExtraTreesClassifier]
Mean of: 0.928, std: (+/-) 0.004 [Bagging ExtraTreesClassifier]

Running XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5)
Mean of: 0.946, std: (+/-) 0.002 [XGBClassifier]
Mean of: 0.943, std: (+/-) 0.003 [Bagging XGBClassifier]

Running LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
Mean of: 0.943, std: (+/-) 0.002 [LGBMClassifier]
Mean of: 0.943, std: (+/-) 0.002 [Bagging LGBMClassifier]

Running BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)
Mean of: 0.925, std: (+/-) 0.002 [BaggingClassifier]
Mean of: 0.932, std: (+/-) 0.005 [Bagging BaggingClassifier]

Boosting

  • Mean of: 0.943, std: (+/-) 0.002 [LGBMClassifier]
  • Mean of: 0.946, std: (+/-) 0.004 [Boosting LGBMClassifier]
Running DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=None,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best')
Mean of: 0.898, std: (+/-) 0.003 [DecisionTreeClassifier]
Mean of: 0.899, std: (+/-) 0.003 [Boosting DecisionTreeClassifier]

Running RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False)
Mean of: 0.942, std: (+/-) 0.003 [RandomForestClassifier]
Mean of: 0.943, std: (+/-) 0.003 [Boosting RandomForestClassifier]

Running ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False)
Mean of: 0.933, std: (+/-) 0.003 [ExtraTreesClassifier]
Mean of: 0.933, std: (+/-) 0.003 [Boosting ExtraTreesClassifier]

Running XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5)
Mean of: 0.946, std: (+/-) 0.002 [XGBClassifier]
Mean of: 0.335, std: (+/-) 0.000 [Boosting XGBClassifier]

Running LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
Mean of: 0.943, std: (+/-) 0.002 [LGBMClassifier]
Mean of: 0.946, std: (+/-) 0.004 [Boosting LGBMClassifier]

Running BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)
Mean of: 0.927, std: (+/-) 0.004 [BaggingClassifier]
Mean of: 0.940, std: (+/-) 0.003 [Boosting BaggingClassifier]
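
The "Boosting X" rows are the same idea with an AdaBoost wrapper, and the log shows it is not a free win: the boosted XGBClassifier collapsed to 0.335, close to chance if the three classes are roughly balanced, so each wrapped model still has to be validated. A hedged sketch of the wrapper (untuned stand-in base model, synthetic data; the `try/except` again covers the `base_estimator` to `estimator` rename):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

base = DecisionTreeClassifier(criterion="entropy", max_depth=20, random_state=2)
# sklearn >= 1.2 uses `estimator`; older releases use `base_estimator`.
try:
    boosted = AdaBoostClassifier(estimator=base, n_estimators=50, random_state=0)
except TypeError:
    boosted = AdaBoostClassifier(base_estimator=base, n_estimators=50, random_state=0)

for label, model in [("DecisionTreeClassifier", base),
                     ("Boosting DecisionTreeClassifier", boosted)]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"Mean of: {scores.mean():.3f}, std: (+/-) {scores.std():.3f} [{label}]")
```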

Model Selection - Stacking

Best stacking model is TunedETClassifier + TunedBaggingClassifier, with an accuracy of 0.9560

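The layer-wise "Fitting 2 layers / Processing layer-1" log comes from a two-layer stacked ensemble: the listed base learners form layer-1, and a meta-learner trained on their out-of-fold predictions forms layer-2. A rough equivalent using scikit-learn's `StackingClassifier` (synthetic data and untuned stand-in base learners; the logistic-regression meta-learner is an assumption, since the log does not print the layer-2 model):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (BaggingClassifier, ExtraTreesClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# Layer-1: two base learners; layer-2: a meta-learner fit on their
# cross-validated (out-of-fold) predictions.
stack = StackingClassifier(
    estimators=[("etc", ExtraTreesClassifier(n_estimators=56, criterion="entropy",
                                             random_state=0)),
                ("bag", BaggingClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5)
stack.fit(X_train, y_train)
acc = accuracy_score(y_test, stack.predict(X_test))
print("Accuracy score: ", acc)
```

Swapping different base-learner combinations into `estimators` and comparing held-out accuracy is exactly the search summarised below.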
Summary of the stacking runs (each a two-layer ensemble with the listed models in layer-1). Hyper-parameter settings for every base learner are unchanged from the tuned estimators printed in the bagging and boosting runs above; every predict pass completed in under a second.

Layer-1 base learners                          Fit time   Accuracy score
RandomForestClassifier                         00:00:07   0.9385
ExtraTreesClassifier                           00:00:02   0.9429
XGBClassifier                                  00:00:23   0.9363
LGBMClassifier                                 00:00:03   0.9341
BaggingClassifier                              00:00:03   0.9121
RandomForest + ExtraTrees                      00:00:09   0.9429
RandomForest + XGBoost                         00:00:31   0.9363
RandomForest + LightGBM                        00:00:10   0.9407
RandomForest + Bagging                         00:00:10   0.9451
ExtraTrees + XGBoost                           00:00:28   0.9385
ExtraTrees + LightGBM                          00:00:05   0.9451
ExtraTrees + Bagging                           00:00:04   0.9560   <- best
XGBoost + LightGBM                             00:00:26   0.9363
XGBoost + Bagging                              00:00:26   0.9385
LightGBM + Bagging                             00:00:06   0.9385
RandomForest + ExtraTrees + XGBoost            00:00:32   0.9451

Running:  [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]

Fitting 2 layers
Processing layer-1             done | 00:00:12
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:12

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9472527472527472 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]

Fitting 2 layers
Processing layer-1             done | 00:00:12
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:12

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9516483516483516 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]

Fitting 2 layers
Processing layer-1             done | 00:00:32
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:32

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9384615384615385 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]

Fitting 2 layers
Processing layer-1             done | 00:00:34
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:34

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9384615384615385 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]

Fitting 2 layers
Processing layer-1             done | 00:00:13
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:14

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9406593406593406 [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]

Fitting 2 layers
Processing layer-1             done | 00:00:28
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:29

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.945054945054945 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]
Running:  [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]

Fitting 2 layers
Processing layer-1             done | 00:00:29
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:29

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9428571428571428 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]

Fitting 2 layers
Processing layer-1             done | 00:00:08
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:08

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9406593406593406 [ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  [XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]

Fitting 2 layers
Processing layer-1             done | 00:00:31
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:31

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.9340659340659341 [XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0), BaggingClassifier(base_estimator=DecisionTreeClassifier(class_weight=None, criterion='entropy', max_depth=20,
            max_features=None, max_leaf_nodes=None,
            min_impurity_decrease=0.0, min_impurity_split=None,
            min_samples_leaf=1, min_samples_split=2,
            min_weight_fraction_leaf=0.0, presort=False, random_state=2,
            splitter='best'),
         bootstrap=True, bootstrap_features=False, max_features=1.0,
         max_samples=1.0, n_estimators=10, n_jobs=None, oob_score=False,
         random_state=None, verbose=0, warm_start=False)]
Running:  [RandomForestClassifier(bootstrap=False, class_weight=None,
            criterion='entropy', max_depth=60, max_features='auto',
            max_leaf_nodes=None, min_impurity_decrease=0.0,
            min_impurity_split=None, min_samples_leaf=1,
            min_samples_split=2, min_weight_fraction_leaf=0.0,
            n_estimators=56, n_jobs=None, oob_score=False,
            random_state=None, verbose=0, warm_start=False), ExtraTreesClassifier(bootstrap=False, class_weight=None, criterion='entropy',
           max_depth=60, max_features='auto', max_leaf_nodes=None,
           min_impurity_decrease=0.0, min_impurity_split=None,
           min_samples_leaf=1, min_samples_split=2,
           min_weight_fraction_leaf=0.0, n_estimators=56, n_jobs=None,
           oob_score=False, random_state=None, verbose=0, warm_start=False), XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,
       colsample_bytree=1, gamma=0, learning_rate=0.1, max_delta_step=0,
       max_depth=60, min_child_weight=1, missing=None, n_estimators=68,
       n_jobs=1, nthread=None, objective='multi:softmax', random_state=0,
       reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=None,
       silent=True, subsample=0.5), LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0,
        importance_type='split', learning_rate=0.08, max_depth=-1,
        min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0,
        n_estimators=100, n_jobs=-1, num_leaves=31, objective='multiclass',
        random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True,
        subsample=1.0, subsample_for_bin=200000, subsample_freq=0)]

Fitting 2 layers
Processing layer-1             done | 00:00:40
Processing layer-2             done | 00:00:00
Fit complete                        | 00:00:41

Predicting 2 layers
Processing layer-1             done | 00:00:00
Processing layer-2             done | 00:00:00
Predict complete                    | 00:00:00
Accuracy score:  0.945054945054945   [RandomForest, ExtraTrees, XGB, LGBM]

Each remaining combination was fitted and scored in the same two-layer stack, with every base learner keeping the hyper-parameters tuned above (fit: 17-42 s, predict: under 1 s per combination):

Accuracy score:  0.9472527472527472  [RandomForest, ExtraTrees, XGB, Bagging]
Accuracy score:  0.945054945054945   [RandomForest, ExtraTrees, LGBM, Bagging]
Accuracy score:  0.9384615384615385  [RandomForest, XGB, LGBM, Bagging]
Accuracy score:  0.945054945054945   [ExtraTrees, XGB, LGBM, Bagging]
Accuracy score:  0.945054945054945   [RandomForest, ExtraTrees, XGB, LGBM, Bagging]

Best stacking model is  [ExtraTreesClassifier, BaggingClassifier]  with accuracy of:  0.9560439560439561
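The combination search above amounts to fitting a two-layer stack for every subset of the tuned base learners and keeping the best-scoring one. A minimal sketch of that loop, using scikit-learn's StackingClassifier with small stand-in estimators (the project's own stacking library and tuned hyper-parameters are assumed, not reproduced here):

```python
from itertools import combinations

from sklearn.datasets import make_classification
from sklearn.ensemble import (ExtraTreesClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Stand-in data; the notebook scores each stack on its Kickstarter features.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Illustrative base learners (the notebook uses its tuned models here).
base = {
    "rf": RandomForestClassifier(n_estimators=25, random_state=0),
    "et": ExtraTreesClassifier(n_estimators=25, random_state=0),
}

best_combo, best_acc = None, 0.0
for r in range(2, len(base) + 1):
    for combo in combinations(base, r):
        stack = StackingClassifier(
            estimators=[(name, base[name]) for name in combo],
            final_estimator=LogisticRegression(),  # layer-2 meta learner
            cv=3,
        )
        stack.fit(X_tr, y_tr)
        acc = accuracy_score(y_te, stack.predict(X_te))
        if acc > best_acc:
            best_combo, best_acc = combo, acc

print("Best stacking model:", best_combo, "accuracy:", round(best_acc, 4))
```

With more base learners in `base`, the same loop enumerates all 4- and 5-model stacks shown above.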

Finally Selected Model

The combination of the tuned ExtraTreesClassifier and the tuned BaggingClassifier is selected, as it gives an accuracy of 0.9439 and does better at predicting both Failed and Successful instances. (See the confusion matrix below.)
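The confusion matrix used for this comparison can be computed with scikit-learn; a sketch on hypothetical labels (0 = Failed, 1 = Successful), not the project's actual predictions:

```python
from sklearn.metrics import confusion_matrix

# Hypothetical true labels and model predictions for illustration only.
y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]

# Rows are the true class, columns the predicted class.
cm = confusion_matrix(y_true, y_pred)
print(cm)  # -> [[3 1]
           #     [1 3]]
```

Here the off-diagonal cells count Failed projects predicted Successful (top right) and Successful projects predicted Failed (bottom left), which is what "better on predicting Failed/Successful instances" refers to.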

Running Model

Run 1: Fit complete | 00:01:35  |  Training Set Accuracy: 0.9995655365623455  |  Test Set Accuracy: 0.9423427588155225
Run 2: Fit complete | 00:24:00  |  Training Set Accuracy: 0.9994585258634158  |  Test Set Accuracy: 0.9410415115014853
Run 3: Fit complete | 00:23:08  |  Training Set Accuracy: 0.9991824382601773  |  Test Set Accuracy: 0.9402624752805815

NN
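The log below comes from a feed-forward network trained with Keras-style epoch output. A comparable setup can be sketched with scikit-learn's MLPClassifier; the hidden-layer sizes, iteration budget, and data here are illustrative assumptions, not the notebook's actual architecture:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Stand-in binary data; the notebook trains on the Kickstarter features.
X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Small feed-forward net trained with Adam (scikit-learn's default solver).
nn = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=300, random_state=0)
nn.fit(X_tr, y_tr)
print("train acc:", nn.score(X_tr, y_tr), "test acc:", nn.score(X_te, y_te))
```

As in the log below, the loss falls and training accuracy rises epoch by epoch until the iteration budget (or convergence) is reached.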

Running NN
Epoch 1/1500
 - 3s - loss: 0.5979 - acc: 0.7327
Epoch 2/1500
 - 3s - loss: 0.4310 - acc: 0.8100
Epoch 3/1500
 - 3s - loss: 0.3981 - acc: 0.8215
...
(loss decreases steadily from 0.5979 and training accuracy climbs from 0.7327 to 0.8765 over the first 178 epochs, at roughly 3 s per epoch)
...
Epoch 177/1500
 - 3s - loss: 0.2888 - acc: 0.8767
Epoch 178/1500
 - 2s - loss: 0.2890 - acc: 0.8765
Epoch 179/1500
 - 2s - loss: 0.2888 - acc: 0.8769
Epoch 180/1500
 - 2s - loss: 0.2889 - acc: 0.8766
Epoch 181/1500
 - 2s - loss: 0.2883 - acc: 0.8771
Epoch 182/1500
 - 2s - loss: 0.2892 - acc: 0.8768
Epoch 183/1500
 - 2s - loss: 0.2889 - acc: 0.8767
Epoch 184/1500
 - 3s - loss: 0.2879 - acc: 0.8773
Epoch 185/1500
 - 2s - loss: 0.2874 - acc: 0.8776
Epoch 186/1500
 - 2s - loss: 0.2874 - acc: 0.8772
Epoch 187/1500
 - 2s - loss: 0.2872 - acc: 0.8771
Epoch 188/1500
 - 2s - loss: 0.2862 - acc: 0.8779
Epoch 189/1500
 - 2s - loss: 0.2874 - acc: 0.8775
Epoch 190/1500
 - 3s - loss: 0.2876 - acc: 0.8770
Epoch 191/1500
 - 2s - loss: 0.2879 - acc: 0.8769
Epoch 192/1500
 - 2s - loss: 0.2874 - acc: 0.8772
Epoch 193/1500
 - 2s - loss: 0.2874 - acc: 0.8772
Epoch 194/1500
 - 2s - loss: 0.2871 - acc: 0.8775
Epoch 195/1500
 - 3s - loss: 0.2862 - acc: 0.8775
Epoch 196/1500
 - 3s - loss: 0.2866 - acc: 0.8775
Epoch 197/1500
 - 3s - loss: 0.2867 - acc: 0.8775
Epoch 198/1500
 - 3s - loss: 0.2857 - acc: 0.8783
Epoch 199/1500
 - 3s - loss: 0.2858 - acc: 0.8780
Epoch 200/1500
 - 3s - loss: 0.2854 - acc: 0.8780
Epoch 201/1500
 - 2s - loss: 0.2850 - acc: 0.8783
Epoch 202/1500
 - 2s - loss: 0.2853 - acc: 0.8786
Epoch 203/1500
 - 3s - loss: 0.2850 - acc: 0.8782
Epoch 204/1500
 - 2s - loss: 0.2848 - acc: 0.8784
Epoch 205/1500
 - 2s - loss: 0.2850 - acc: 0.8782
Epoch 206/1500
 - 2s - loss: 0.2847 - acc: 0.8786
Epoch 207/1500
 - 2s - loss: 0.2849 - acc: 0.8786
Epoch 208/1500
 - 2s - loss: 0.2847 - acc: 0.8785
Epoch 209/1500
 - 3s - loss: 0.2846 - acc: 0.8789
Epoch 210/1500
 - 2s - loss: 0.2841 - acc: 0.8790
Epoch 211/1500
 - 2s - loss: 0.2857 - acc: 0.8788
Epoch 212/1500
 - 2s - loss: 0.2839 - acc: 0.8793
Epoch 213/1500
 - 2s - loss: 0.2830 - acc: 0.8799
Epoch 214/1500
 - 2s - loss: 0.2824 - acc: 0.8796
Epoch 215/1500
 - 2s - loss: 0.2820 - acc: 0.8800
Epoch 216/1500
 - 3s - loss: 0.2810 - acc: 0.8802
Epoch 217/1500
 - 2s - loss: 0.2804 - acc: 0.8805
Epoch 218/1500
 - 2s - loss: 0.2796 - acc: 0.8803
Epoch 219/1500
 - 2s - loss: 0.2783 - acc: 0.8814
Epoch 220/1500
 - 3s - loss: 0.2776 - acc: 0.8819
Epoch 221/1500
 - 2s - loss: 0.2783 - acc: 0.8814
Epoch 222/1500
 - 3s - loss: 0.2772 - acc: 0.8821
Epoch 223/1500
 - 2s - loss: 0.2774 - acc: 0.8822
Epoch 224/1500
 - 2s - loss: 0.2769 - acc: 0.8822
Epoch 225/1500
 - 2s - loss: 0.2765 - acc: 0.8825
Epoch 226/1500
 - 2s - loss: 0.2767 - acc: 0.8824
Epoch 227/1500
 - 2s - loss: 0.2765 - acc: 0.8823
Epoch 228/1500
 - 3s - loss: 0.2753 - acc: 0.8831
Epoch 229/1500
 - 3s - loss: 0.2752 - acc: 0.8828
Epoch 230/1500
 - 2s - loss: 0.2751 - acc: 0.8832
Epoch 231/1500
 - 2s - loss: 0.2743 - acc: 0.8832
Epoch 232/1500
 - 2s - loss: 0.2743 - acc: 0.8835
Epoch 233/1500
 - 2s - loss: 0.2741 - acc: 0.8835
Epoch 234/1500
 - 2s - loss: 0.2742 - acc: 0.8837
Epoch 235/1500
 - 3s - loss: 0.2742 - acc: 0.8835
Epoch 236/1500
 - 2s - loss: 0.2738 - acc: 0.8839
Epoch 237/1500
 - 2s - loss: 0.2740 - acc: 0.8836
Epoch 238/1500
 - 2s - loss: 0.2740 - acc: 0.8838
Epoch 239/1500
 - 2s - loss: 0.2733 - acc: 0.8840
Epoch 240/1500
 - 2s - loss: 0.2744 - acc: 0.8839
Epoch 241/1500
 - 3s - loss: 0.2735 - acc: 0.8837
Epoch 242/1500
 - 2s - loss: 0.2736 - acc: 0.8839
Epoch 243/1500
 - 3s - loss: 0.2730 - acc: 0.8837
Epoch 244/1500
 - 3s - loss: 0.2739 - acc: 0.8837
Epoch 245/1500
 - 2s - loss: 0.2719 - acc: 0.8842
Epoch 246/1500
 - 2s - loss: 0.2713 - acc: 0.8850
Epoch 247/1500
 - 2s - loss: 0.2707 - acc: 0.8852
Epoch 248/1500
 - 3s - loss: 0.2699 - acc: 0.8857
Epoch 249/1500
 - 2s - loss: 0.2688 - acc: 0.8862
Epoch 250/1500
 - 2s - loss: 0.2690 - acc: 0.8857
Epoch 251/1500
 - 2s - loss: 0.2694 - acc: 0.8862
Epoch 252/1500
 - 2s - loss: 0.2675 - acc: 0.8866
Epoch 253/1500
 - 2s - loss: 0.2675 - acc: 0.8871
Epoch 254/1500
 - 3s - loss: 0.2668 - acc: 0.8869
Epoch 255/1500
 - 2s - loss: 0.2669 - acc: 0.8866
Epoch 256/1500
 - 2s - loss: 0.2659 - acc: 0.8875
Epoch 257/1500
 - 2s - loss: 0.2663 - acc: 0.8875
Epoch 258/1500
 - 2s - loss: 0.2660 - acc: 0.8873
Epoch 259/1500
 - 3s - loss: 0.2647 - acc: 0.8875
Epoch 260/1500
 - 3s - loss: 0.2647 - acc: 0.8872
Epoch 261/1500
 - 3s - loss: 0.2642 - acc: 0.8879
Epoch 262/1500
 - 2s - loss: 0.2624 - acc: 0.8888
Epoch 263/1500
 - 2s - loss: 0.2625 - acc: 0.8884
Epoch 264/1500
 - 2s - loss: 0.2624 - acc: 0.8886
Epoch 265/1500
 - 2s - loss: 0.2623 - acc: 0.8884
Epoch 266/1500
 - 2s - loss: 0.2621 - acc: 0.8886
Epoch 267/1500
 - 3s - loss: 0.2611 - acc: 0.8887
Epoch 268/1500
 - 2s - loss: 0.2616 - acc: 0.8884
Epoch 269/1500
 - 2s - loss: 0.2609 - acc: 0.8892
Epoch 270/1500
 - 2s - loss: 0.2604 - acc: 0.8892
Epoch 271/1500
 - 3s - loss: 0.2613 - acc: 0.8886
Epoch 272/1500
 - 3s - loss: 0.2606 - acc: 0.8889
Epoch 273/1500
 - 3s - loss: 0.2598 - acc: 0.8893
Epoch 274/1500
 - 3s - loss: 0.2603 - acc: 0.8891
Epoch 275/1500
 - 3s - loss: 0.2600 - acc: 0.8895
Epoch 276/1500
 - 2s - loss: 0.2598 - acc: 0.8891
Epoch 277/1500
 - 2s - loss: 0.2599 - acc: 0.8892
Epoch 278/1500
 - 2s - loss: 0.2598 - acc: 0.8889
Epoch 279/1500
 - 3s - loss: 0.2589 - acc: 0.8896
Epoch 280/1500
 - 2s - loss: 0.2588 - acc: 0.8898
Epoch 281/1500
 - 2s - loss: 0.2597 - acc: 0.8892
Epoch 282/1500
 - 2s - loss: 0.2590 - acc: 0.8896
Epoch 283/1500
 - 2s - loss: 0.2590 - acc: 0.8890
Epoch 284/1500
 - 2s - loss: 0.2581 - acc: 0.8901
Epoch 285/1500
 - 3s - loss: 0.2590 - acc: 0.8897
Epoch 286/1500
 - 3s - loss: 0.2584 - acc: 0.8899
Epoch 287/1500
 - 2s - loss: 0.2574 - acc: 0.8902
Epoch 288/1500
 - 2s - loss: 0.2579 - acc: 0.8900
Epoch 289/1500
 - 2s - loss: 0.2570 - acc: 0.8904
Epoch 290/1500
 - 2s - loss: 0.2579 - acc: 0.8898
Epoch 291/1500
 - 2s - loss: 0.2569 - acc: 0.8900
Epoch 292/1500
 - 3s - loss: 0.2566 - acc: 0.8908
Epoch 293/1500
 - 2s - loss: 0.2578 - acc: 0.8896
Epoch 294/1500
 - 2s - loss: 0.2578 - acc: 0.8902
Epoch 295/1500
 - 2s - loss: 0.2568 - acc: 0.8905
Epoch 296/1500
 - 2s - loss: 0.2574 - acc: 0.8902
Epoch 297/1500
 - 2s - loss: 0.2563 - acc: 0.8903
Epoch 298/1500
 - 2s - loss: 0.2566 - acc: 0.8904
Epoch 299/1500
 - 3s - loss: 0.2566 - acc: 0.8905
Epoch 300/1500
 - 2s - loss: 0.2567 - acc: 0.8906
Epoch 301/1500
 - 3s - loss: 0.2572 - acc: 0.8900
Epoch 302/1500
 - 2s - loss: 0.2564 - acc: 0.8902
Epoch 303/1500
 - 2s - loss: 0.2568 - acc: 0.8903
Epoch 304/1500
 - 2s - loss: 0.2553 - acc: 0.8912
Epoch 305/1500
 - 3s - loss: 0.2556 - acc: 0.8908
Epoch 306/1500
 - 2s - loss: 0.2560 - acc: 0.8906
Epoch 307/1500
 - 2s - loss: 0.2559 - acc: 0.8907
Epoch 308/1500
 - 2s - loss: 0.2563 - acc: 0.8904
Epoch 309/1500
 - 2s - loss: 0.2558 - acc: 0.8904
Epoch 310/1500
 - 2s - loss: 0.2559 - acc: 0.8904
Epoch 311/1500
 - 3s - loss: 0.2546 - acc: 0.8911
Epoch 312/1500
 - 3s - loss: 0.2558 - acc: 0.8904
Epoch 313/1500
 - 2s - loss: 0.2558 - acc: 0.8905
Epoch 314/1500
 - 2s - loss: 0.2557 - acc: 0.8906
Epoch 315/1500
 - 2s - loss: 0.2543 - acc: 0.8909
Epoch 316/1500
 - 2s - loss: 0.2551 - acc: 0.8909
Epoch 317/1500
 - 2s - loss: 0.2546 - acc: 0.8911
Epoch 318/1500
 - 3s - loss: 0.2558 - acc: 0.8905
Epoch 319/1500
 - 2s - loss: 0.2552 - acc: 0.8908
Epoch 320/1500
 - 2s - loss: 0.2556 - acc: 0.8905
Epoch 321/1500
 - 3s - loss: 0.2547 - acc: 0.8909
Epoch 322/1500
 - 3s - loss: 0.2547 - acc: 0.8909
Epoch 323/1500
 - 3s - loss: 0.2552 - acc: 0.8906
Epoch 324/1500
 - 3s - loss: 0.2546 - acc: 0.8910
Epoch 325/1500
 - 2s - loss: 0.2550 - acc: 0.8910
Epoch 326/1500
 - 2s - loss: 0.2548 - acc: 0.8907
Epoch 327/1500
 - 2s - loss: 0.2558 - acc: 0.8905
Epoch 328/1500
 - 2s - loss: 0.2550 - acc: 0.8909
Epoch 329/1500
 - 2s - loss: 0.2541 - acc: 0.8908
Epoch 330/1500
 - 2s - loss: 0.2540 - acc: 0.8916
Epoch 331/1500
 - 3s - loss: 0.2540 - acc: 0.8914
Epoch 332/1500
 - 2s - loss: 0.2549 - acc: 0.8906
Epoch 333/1500
 - 2s - loss: 0.2544 - acc: 0.8910
Epoch 334/1500
 - 2s - loss: 0.2541 - acc: 0.8910
Epoch 335/1500
 - 2s - loss: 0.2542 - acc: 0.8909
Epoch 336/1500
 - 2s - loss: 0.2544 - acc: 0.8909
Epoch 337/1500
 - 3s - loss: 0.2546 - acc: 0.8909
Epoch 338/1500
 - 2s - loss: 0.2536 - acc: 0.8918
Epoch 339/1500
 - 2s - loss: 0.2544 - acc: 0.8910
Epoch 340/1500
 - 2s - loss: 0.2542 - acc: 0.8911
Epoch 341/1500
 - 2s - loss: 0.2539 - acc: 0.8912
Epoch 342/1500
 - 2s - loss: 0.2542 - acc: 0.8911
Epoch 343/1500
 - 2s - loss: 0.2540 - acc: 0.8914
Epoch 344/1500
 - 3s - loss: 0.2539 - acc: 0.8912
Epoch 345/1500
 - 2s - loss: 0.2540 - acc: 0.8908
Epoch 346/1500
 - 3s - loss: 0.2534 - acc: 0.8912
Epoch 347/1500
 - 3s - loss: 0.2535 - acc: 0.8914
Epoch 348/1500
 - 3s - loss: 0.2529 - acc: 0.8916
Epoch 349/1500
 - 3s - loss: 0.2537 - acc: 0.8911
Epoch 350/1500
 - 3s - loss: 0.2536 - acc: 0.8914
Epoch 351/1500
 - 3s - loss: 0.2537 - acc: 0.8910
Epoch 352/1500
 - 2s - loss: 0.2544 - acc: 0.8913
Epoch 353/1500
 - 3s - loss: 0.2532 - acc: 0.8918
Epoch 354/1500
 - 2s - loss: 0.2540 - acc: 0.8916
Epoch 355/1500
 - 2s - loss: 0.2535 - acc: 0.8913
Epoch 356/1500
 - 3s - loss: 0.2533 - acc: 0.8912
Epoch 357/1500
 - 2s - loss: 0.2524 - acc: 0.8919
Epoch 358/1500
 - 2s - loss: 0.2530 - acc: 0.8917
Epoch 359/1500
 - 2s - loss: 0.2534 - acc: 0.8913
Epoch 360/1500
 - 2s - loss: 0.2535 - acc: 0.8913
Epoch 361/1500
 - 2s - loss: 0.2527 - acc: 0.8917
Epoch 362/1500
 - 3s - loss: 0.2540 - acc: 0.8907
Epoch 363/1500
 - 2s - loss: 0.2534 - acc: 0.8911
Epoch 364/1500
 - 3s - loss: 0.2520 - acc: 0.8918
Epoch 365/1500
 - 2s - loss: 0.2528 - acc: 0.8915
Epoch 366/1500
 - 2s - loss: 0.2526 - acc: 0.8912
Epoch 367/1500
 - 2s - loss: 0.2523 - acc: 0.8918
Epoch 368/1500
 - 3s - loss: 0.2519 - acc: 0.8918
Epoch 369/1500
 - 3s - loss: 0.2516 - acc: 0.8921
Epoch 370/1500
 - 2s - loss: 0.2519 - acc: 0.8918
Epoch 371/1500
 - 2s - loss: 0.2520 - acc: 0.8919
Epoch 372/1500
 - 2s - loss: 0.2509 - acc: 0.8922
Epoch 373/1500
 - 3s - loss: 0.2515 - acc: 0.8920
Epoch 374/1500
 - 2s - loss: 0.2517 - acc: 0.8917
Epoch 375/1500
 - 3s - loss: 0.2512 - acc: 0.8918
Epoch 376/1500
 - 2s - loss: 0.2515 - acc: 0.8918
Epoch 377/1500
 - 2s - loss: 0.2518 - acc: 0.8920
Epoch 378/1500
 - 2s - loss: 0.2513 - acc: 0.8926
Epoch 379/1500
 - 2s - loss: 0.2514 - acc: 0.8922
Epoch 380/1500
 - 2s - loss: 0.2512 - acc: 0.8921
Epoch 381/1500
 - 3s - loss: 0.2507 - acc: 0.8923
Epoch 382/1500
 - 3s - loss: 0.2514 - acc: 0.8923
Epoch 383/1500
 - 2s - loss: 0.2506 - acc: 0.8923
Epoch 384/1500
 - 2s - loss: 0.2506 - acc: 0.8923
Epoch 385/1500
 - 2s - loss: 0.2513 - acc: 0.8924
Epoch 386/1500
 - 2s - loss: 0.2505 - acc: 0.8925
Epoch 387/1500
 - 2s - loss: 0.2502 - acc: 0.8926
Epoch 388/1500
 - 3s - loss: 0.2517 - acc: 0.8919
Epoch 389/1500
 - 2s - loss: 0.2501 - acc: 0.8925
Epoch 390/1500
 - 2s - loss: 0.2498 - acc: 0.8927
Epoch 391/1500
 - 2s - loss: 0.2501 - acc: 0.8926
Epoch 392/1500
 - 2s - loss: 0.2497 - acc: 0.8924
Epoch 393/1500
 - 2s - loss: 0.2496 - acc: 0.8928
Epoch 394/1500
 - 3s - loss: 0.2494 - acc: 0.8929
Epoch 395/1500
 - 3s - loss: 0.2506 - acc: 0.8924
Epoch 396/1500
 - 2s - loss: 0.2496 - acc: 0.8928
Epoch 397/1500
 - 3s - loss: 0.2503 - acc: 0.8925
Epoch 398/1500
 - 2s - loss: 0.2489 - acc: 0.8930
Epoch 399/1500
 - 2s - loss: 0.2488 - acc: 0.8930
Epoch 400/1500
 - 2s - loss: 0.2494 - acc: 0.8925
Epoch 401/1500
 - 3s - loss: 0.2492 - acc: 0.8932
Epoch 402/1500
 - 2s - loss: 0.2494 - acc: 0.8933
Epoch 403/1500
 - 2s - loss: 0.2483 - acc: 0.8931
Epoch 404/1500
 - 2s - loss: 0.2483 - acc: 0.8932
Epoch 405/1500
 - 2s - loss: 0.2487 - acc: 0.8935
Epoch 406/1500
 - 2s - loss: 0.2487 - acc: 0.8931
Epoch 407/1500
 - 3s - loss: 0.2490 - acc: 0.8929
Epoch 408/1500
 - 3s - loss: 0.2488 - acc: 0.8931
Epoch 409/1500
 - 2s - loss: 0.2484 - acc: 0.8931
Epoch 410/1500
 - 2s - loss: 0.2476 - acc: 0.8936
Epoch 411/1500
 - 2s - loss: 0.2482 - acc: 0.8934
Epoch 412/1500
 - 2s - loss: 0.2492 - acc: 0.8931
Epoch 413/1500
 - 2s - loss: 0.2476 - acc: 0.8937
Epoch 414/1500
 - 3s - loss: 0.2481 - acc: 0.8933
Epoch 415/1500
 - 2s - loss: 0.2490 - acc: 0.8928
Epoch 416/1500
 - 3s - loss: 0.2476 - acc: 0.8934
Epoch 417/1500
 - 3s - loss: 0.2477 - acc: 0.8932
Epoch 418/1500
 - 3s - loss: 0.2470 - acc: 0.8938
Epoch 419/1500
 - 3s - loss: 0.2472 - acc: 0.8936
Epoch 420/1500
 - 3s - loss: 0.2477 - acc: 0.8935
Epoch 421/1500
 - 2s - loss: 0.2480 - acc: 0.8935
Epoch 422/1500
 - 3s - loss: 0.2470 - acc: 0.8936
Epoch 423/1500
 - 3s - loss: 0.2474 - acc: 0.8932
Epoch 424/1500
 - 3s - loss: 0.2479 - acc: 0.8935
Epoch 425/1500
 - 3s - loss: 0.2472 - acc: 0.8938
Epoch 426/1500
 - 3s - loss: 0.2468 - acc: 0.8940
Epoch 427/1500
 - 3s - loss: 0.2469 - acc: 0.8942
Epoch 428/1500
 - 2s - loss: 0.2472 - acc: 0.8934
Epoch 429/1500
 - 2s - loss: 0.2466 - acc: 0.8940
Epoch 430/1500
 - 2s - loss: 0.2470 - acc: 0.8934
Epoch 431/1500
 - 2s - loss: 0.2470 - acc: 0.8938
Epoch 432/1500
 - 3s - loss: 0.2471 - acc: 0.8938
Epoch 433/1500
 - 2s - loss: 0.2483 - acc: 0.8933
Epoch 434/1500
 - 2s - loss: 0.2479 - acc: 0.8932
Epoch 435/1500
 - 2s - loss: 0.2475 - acc: 0.8936
Epoch 436/1500
 - 2s - loss: 0.2471 - acc: 0.8940
Epoch 437/1500
 - 2s - loss: 0.2469 - acc: 0.8939
Epoch 438/1500
 - 2s - loss: 0.2462 - acc: 0.8944
Epoch 439/1500
 - 3s - loss: 0.2464 - acc: 0.8940
Epoch 440/1500
 - 2s - loss: 0.2467 - acc: 0.8939
Epoch 441/1500
 - 2s - loss: 0.2475 - acc: 0.8940
Epoch 442/1500
 - 2s - loss: 0.2471 - acc: 0.8936
Epoch 443/1500
 - 2s - loss: 0.2470 - acc: 0.8938
Epoch 444/1500
 - 2s - loss: 0.2469 - acc: 0.8938
Epoch 445/1500
 - 3s - loss: 0.2465 - acc: 0.8941
Epoch 446/1500
 - 3s - loss: 0.2466 - acc: 0.8939
Epoch 447/1500
 - 3s - loss: 0.2466 - acc: 0.8940
Epoch 448/1500
 - 2s - loss: 0.2460 - acc: 0.8945
Epoch 449/1500
 - 2s - loss: 0.2463 - acc: 0.8943
Epoch 450/1500
 - 2s - loss: 0.2462 - acc: 0.8947
Epoch 451/1500
 - 2s - loss: 0.2474 - acc: 0.8937
Epoch 452/1500
 - 3s - loss: 0.2466 - acc: 0.8941
Epoch 453/1500
 - 2s - loss: 0.2474 - acc: 0.8940
Epoch 454/1500
 - 2s - loss: 0.2457 - acc: 0.8944
Epoch 455/1500
 - 3s - loss: 0.2475 - acc: 0.8936
Epoch 456/1500
 - 3s - loss: 0.2457 - acc: 0.8944
Epoch 457/1500
 - 3s - loss: 0.2459 - acc: 0.8945
Epoch 458/1500
 - 3s - loss: 0.2456 - acc: 0.8944
Epoch 459/1500
 - 2s - loss: 0.2471 - acc: 0.8939
Epoch 460/1500
 - 2s - loss: 0.2468 - acc: 0.8942
Epoch 461/1500
 - 2s - loss: 0.2459 - acc: 0.8941
Epoch 462/1500
 - 2s - loss: 0.2460 - acc: 0.8945
Epoch 463/1500
 - 2s - loss: 0.2467 - acc: 0.8942
Epoch 464/1500
 - 3s - loss: 0.2460 - acc: 0.8943
Epoch 465/1500
 - 2s - loss: 0.2464 - acc: 0.8942
Epoch 466/1500
 - 2s - loss: 0.2460 - acc: 0.8944
Epoch 467/1500
 - 2s - loss: 0.2457 - acc: 0.8945
Epoch 468/1500
 - 2s - loss: 0.2457 - acc: 0.8945
Epoch 469/1500
 - 2s - loss: 0.2459 - acc: 0.8947
Epoch 470/1500
 - 2s - loss: 0.2458 - acc: 0.8947
Epoch 471/1500
 - 3s - loss: 0.2462 - acc: 0.8945
Epoch 472/1500
 - 2s - loss: 0.2463 - acc: 0.8945
Epoch 473/1500
 - 2s - loss: 0.2466 - acc: 0.8943
Epoch 474/1500
 - 2s - loss: 0.2449 - acc: 0.8946
Epoch 475/1500
 - 2s - loss: 0.2456 - acc: 0.8943
Epoch 476/1500
 - 2s - loss: 0.2461 - acc: 0.8947
Epoch 477/1500
 - 3s - loss: 0.2464 - acc: 0.8941
Epoch 478/1500
 - 2s - loss: 0.2465 - acc: 0.8944
Epoch 479/1500
 - 2s - loss: 0.2452 - acc: 0.8947
Epoch 480/1500
 - 2s - loss: 0.2453 - acc: 0.8948
Epoch 481/1500
 - 2s - loss: 0.2458 - acc: 0.8944
Epoch 482/1500
 - 2s - loss: 0.2456 - acc: 0.8946
Epoch 483/1500
 - 2s - loss: 0.2453 - acc: 0.8953
Epoch 484/1500
 - 3s - loss: 0.2450 - acc: 0.8950
Epoch 485/1500
 - 2s - loss: 0.2449 - acc: 0.8950
Epoch 486/1500
 - 2s - loss: 0.2460 - acc: 0.8947
Epoch 487/1500
 - 2s - loss: 0.2446 - acc: 0.8950
Epoch 488/1500
 - 2s - loss: 0.2456 - acc: 0.8949
Epoch 489/1500
 - 2s - loss: 0.2448 - acc: 0.8952
Epoch 490/1500
 - 3s - loss: 0.2444 - acc: 0.8952
Epoch 491/1500
 - 2s - loss: 0.2451 - acc: 0.8949
Epoch 492/1500
 - 3s - loss: 0.2456 - acc: 0.8948
Epoch 493/1500
 - 2s - loss: 0.2450 - acc: 0.8951
Epoch 494/1500
 - 2s - loss: 0.2446 - acc: 0.8947
Epoch 495/1500
 - 3s - loss: 0.2451 - acc: 0.8948
Epoch 496/1500
 - 3s - loss: 0.2442 - acc: 0.8954
Epoch 497/1500
 - 2s - loss: 0.2446 - acc: 0.8949
Epoch 498/1500
 - 3s - loss: 0.2452 - acc: 0.8948
Epoch 499/1500
 - 3s - loss: 0.2457 - acc: 0.8947
Epoch 500/1500
 - 3s - loss: 0.2447 - acc: 0.8952
Epoch 501/1500
 - 3s - loss: 0.2447 - acc: 0.8949
Epoch 502/1500
 - 3s - loss: 0.2451 - acc: 0.8949
Epoch 503/1500
 - 2s - loss: 0.2448 - acc: 0.8949
Epoch 504/1500
 - 2s - loss: 0.2448 - acc: 0.8950
Epoch 505/1500
 - 2s - loss: 0.2447 - acc: 0.8952
Epoch 506/1500
 - 2s - loss: 0.2455 - acc: 0.8945
Epoch 507/1500
 - 2s - loss: 0.2446 - acc: 0.8949
Epoch 508/1500
 - 2s - loss: 0.2447 - acc: 0.8947
Epoch 509/1500
 - 3s - loss: 0.2451 - acc: 0.8950
Epoch 510/1500
 - 2s - loss: 0.2442 - acc: 0.8951
Epoch 511/1500
 - 3s - loss: 0.2447 - acc: 0.8949
Epoch 512/1500
 - 3s - loss: 0.2446 - acc: 0.8950
Epoch 513/1500
 - 3s - loss: 0.2452 - acc: 0.8949
Epoch 514/1500
 - 3s - loss: 0.2445 - acc: 0.8950
Epoch 515/1500
 - 3s - loss: 0.2443 - acc: 0.8954
Epoch 516/1500
 - 2s - loss: 0.2451 - acc: 0.8949
Epoch 517/1500
 - 3s - loss: 0.2438 - acc: 0.8954
Epoch 518/1500
 - 2s - loss: 0.2445 - acc: 0.8948
Epoch 519/1500
 - 3s - loss: 0.2446 - acc: 0.8949
Epoch 520/1500
 - 3s - loss: 0.2439 - acc: 0.8953
Epoch 521/1500
 - 3s - loss: 0.2443 - acc: 0.8953
Epoch 522/1500
 - 3s - loss: 0.2442 - acc: 0.8949
Epoch 523/1500
 - 3s - loss: 0.2439 - acc: 0.8957
Epoch 524/1500
 - 3s - loss: 0.2437 - acc: 0.8950
Epoch 525/1500
 - 3s - loss: 0.2442 - acc: 0.8952
Epoch 526/1500
 - 3s - loss: 0.2445 - acc: 0.8954
Epoch 527/1500
 - 3s - loss: 0.2430 - acc: 0.8955
Epoch 528/1500
 - 2s - loss: 0.2438 - acc: 0.8957
Epoch 529/1500
 - 2s - loss: 0.2437 - acc: 0.8954
Epoch 530/1500
 - 3s - loss: 0.2433 - acc: 0.8954
Epoch 531/1500
 - 3s - loss: 0.2442 - acc: 0.8950
Epoch 532/1500
 - 2s - loss: 0.2445 - acc: 0.8949
Epoch 533/1500
 - 2s - loss: 0.2431 - acc: 0.8954
Epoch 534/1500
 - 3s - loss: 0.2440 - acc: 0.8953
Epoch 535/1500
 - 2s - loss: 0.2429 - acc: 0.8954
Epoch 536/1500
 - 2s - loss: 0.2429 - acc: 0.8956
Epoch 537/1500
 - 2s - loss: 0.2438 - acc: 0.8955
Epoch 538/1500
 - 3s - loss: 0.2432 - acc: 0.8957
Epoch 539/1500
 - 3s - loss: 0.2439 - acc: 0.8953
Epoch 540/1500
 - 3s - loss: 0.2443 - acc: 0.8950
Epoch 541/1500
 - 2s - loss: 0.2441 - acc: 0.8951
Epoch 542/1500
 - 2s - loss: 0.2435 - acc: 0.8953
Epoch 543/1500
 - 3s - loss: 0.2444 - acc: 0.8948
Epoch 544/1500
 - 2s - loss: 0.2435 - acc: 0.8951
Epoch 545/1500
 - 2s - loss: 0.2440 - acc: 0.8951
Epoch 546/1500
 - 3s - loss: 0.2434 - acc: 0.8954
Epoch 547/1500
 - 2s - loss: 0.2434 - acc: 0.8952
Epoch 548/1500
 - 2s - loss: 0.2441 - acc: 0.8953
Epoch 549/1500
 - 2s - loss: 0.2432 - acc: 0.8955
Epoch 550/1500
 - 2s - loss: 0.2423 - acc: 0.8961
Epoch 551/1500
 - 2s - loss: 0.2442 - acc: 0.8951
Epoch 552/1500
 - 2s - loss: 0.2434 - acc: 0.8955
Epoch 553/1500
 - 3s - loss: 0.2430 - acc: 0.8955
Epoch 554/1500
 - 2s - loss: 0.2425 - acc: 0.8958
Epoch 555/1500
 - 2s - loss: 0.2429 - acc: 0.8959
Epoch 556/1500
 - 3s - loss: 0.2429 - acc: 0.8958
Epoch 557/1500
 - 3s - loss: 0.2428 - acc: 0.8957
Epoch 558/1500
 - 3s - loss: 0.2435 - acc: 0.8955
Epoch 559/1500
 - 3s - loss: 0.2425 - acc: 0.8955
Epoch 560/1500
 - 2s - loss: 0.2432 - acc: 0.8956
Epoch 561/1500
 - 2s - loss: 0.2439 - acc: 0.8948
Epoch 562/1500
 - 2s - loss: 0.2436 - acc: 0.8954
Epoch 563/1500
 - 2s - loss: 0.2427 - acc: 0.8957
Epoch 564/1500
 - 2s - loss: 0.2430 - acc: 0.8954
Epoch 565/1500
 - 3s - loss: 0.2425 - acc: 0.8955
Epoch 566/1500
 - 3s - loss: 0.2426 - acc: 0.8958
Epoch 567/1500
 - 2s - loss: 0.2423 - acc: 0.8958
Epoch 568/1500
 - 2s - loss: 0.2421 - acc: 0.8962
Epoch 569/1500
 - 3s - loss: 0.2426 - acc: 0.8957
Epoch 570/1500
 - 3s - loss: 0.2438 - acc: 0.8953
Epoch 571/1500
 - 2s - loss: 0.2427 - acc: 0.8958
Epoch 572/1500
 - 3s - loss: 0.2427 - acc: 0.8956
Epoch 573/1500
 - 3s - loss: 0.2422 - acc: 0.8959
Epoch 574/1500
 - 3s - loss: 0.2433 - acc: 0.8954
Epoch 575/1500
 - 3s - loss: 0.2431 - acc: 0.8955
Epoch 576/1500
 - 3s - loss: 0.2427 - acc: 0.8960
Epoch 577/1500
 - 3s - loss: 0.2432 - acc: 0.8953
Epoch 578/1500
 - 3s - loss: 0.2426 - acc: 0.8958
Epoch 579/1500
 - 2s - loss: 0.2434 - acc: 0.8954
Epoch 580/1500
 - 2s - loss: 0.2422 - acc: 0.8962
Epoch 581/1500
 - 2s - loss: 0.2435 - acc: 0.8952
Epoch 582/1500
 - 2s - loss: 0.2425 - acc: 0.8957
Epoch 583/1500
 - 3s - loss: 0.2424 - acc: 0.8957
Epoch 584/1500
 - 3s - loss: 0.2424 - acc: 0.8958
Epoch 585/1500
 - 2s - loss: 0.2425 - acc: 0.8956
Epoch 586/1500
 - 3s - loss: 0.2420 - acc: 0.8959
Epoch 587/1500
 - 2s - loss: 0.2425 - acc: 0.8960
Epoch 588/1500
 - 2s - loss: 0.2410 - acc: 0.8967
Epoch 589/1500
 - 2s - loss: 0.2420 - acc: 0.8961
Epoch 590/1500
 - 2s - loss: 0.2416 - acc: 0.8957
Epoch 591/1500
 - 3s - loss: 0.2429 - acc: 0.8953
Epoch 592/1500
 - 2s - loss: 0.2434 - acc: 0.8956
Epoch 593/1500
 - 2s - loss: 0.2422 - acc: 0.8958
Epoch 594/1500
 - 2s - loss: 0.2431 - acc: 0.8954
Epoch 595/1500
 - 2s - loss: 0.2413 - acc: 0.8963
Epoch 596/1500
 - 2s - loss: 0.2417 - acc: 0.8959
Epoch 597/1500
 - 3s - loss: 0.2418 - acc: 0.8960
Epoch 598/1500
 - 2s - loss: 0.2418 - acc: 0.8962
Epoch 599/1500
 - 2s - loss: 0.2411 - acc: 0.8965
Epoch 600/1500
 - 2s - loss: 0.2425 - acc: 0.8957
Epoch 601/1500
 - 3s - loss: 0.2417 - acc: 0.8963
Epoch 602/1500
 - 3s - loss: 0.2416 - acc: 0.8963
Epoch 603/1500
 - 3s - loss: 0.2410 - acc: 0.8965
Epoch 604/1500
 - 2s - loss: 0.2414 - acc: 0.8966
Epoch 605/1500
 - 3s - loss: 0.2410 - acc: 0.8966
Epoch 606/1500
 - 2s - loss: 0.2405 - acc: 0.8967
Epoch 607/1500
 - 2s - loss: 0.2417 - acc: 0.8960
Epoch 608/1500
 - 2s - loss: 0.2416 - acc: 0.8964
Epoch 609/1500
 - 2s - loss: 0.2403 - acc: 0.8965
Epoch 610/1500
 - 3s - loss: 0.2407 - acc: 0.8965
Epoch 611/1500
 - 2s - loss: 0.2400 - acc: 0.8971
Epoch 612/1500
 - 2s - loss: 0.2404 - acc: 0.8968
Epoch 613/1500
 - 2s - loss: 0.2398 - acc: 0.8970
Epoch 614/1500
 - 2s - loss: 0.2404 - acc: 0.8967
Epoch 615/1500
 - 2s - loss: 0.2399 - acc: 0.8967
Epoch 616/1500
 - 3s - loss: 0.2397 - acc: 0.8971
Epoch 617/1500
 - 2s - loss: 0.2404 - acc: 0.8969
Epoch 618/1500
 - 2s - loss: 0.2396 - acc: 0.8973
Epoch 619/1500
 - 2s - loss: 0.2395 - acc: 0.8971
Epoch 620/1500
 - 2s - loss: 0.2393 - acc: 0.8967
Epoch 621/1500
 - 2s - loss: 0.2399 - acc: 0.8964
Epoch 622/1500
 - 2s - loss: 0.2393 - acc: 0.8969
Epoch 623/1500
 - 3s - loss: 0.2394 - acc: 0.8971
Epoch 624/1500
 - 2s - loss: 0.2393 - acc: 0.8968
Epoch 625/1500
 - 2s - loss: 0.2389 - acc: 0.8968
Epoch 626/1500
 - 2s - loss: 0.2390 - acc: 0.8969
Epoch 627/1500
 - 2s - loss: 0.2397 - acc: 0.8969
Epoch 628/1500
 - 2s - loss: 0.2393 - acc: 0.8966
Epoch 629/1500
 - 3s - loss: 0.2388 - acc: 0.8969
Epoch 630/1500
 - 2s - loss: 0.2397 - acc: 0.8968
Epoch 631/1500
 - 2s - loss: 0.2380 - acc: 0.8974
Epoch 632/1500
 - 2s - loss: 0.2392 - acc: 0.8967
Epoch 633/1500
 - 2s - loss: 0.2391 - acc: 0.8967
Epoch 634/1500
 - 2s - loss: 0.2397 - acc: 0.8965
Epoch 635/1500
 - 2s - loss: 0.2379 - acc: 0.8972
Epoch 636/1500
 - 3s - loss: 0.2394 - acc: 0.8962
Epoch 637/1500
 - 2s - loss: 0.2391 - acc: 0.8968
Epoch 638/1500
 - 2s - loss: 0.2388 - acc: 0.8965
Epoch 639/1500
 - 2s - loss: 0.2389 - acc: 0.8967
Epoch 640/1500
 - 2s - loss: 0.2379 - acc: 0.8970
Epoch 641/1500
 - 2s - loss: 0.2381 - acc: 0.8973
Epoch 642/1500
 - 3s - loss: 0.2381 - acc: 0.8971
Epoch 643/1500
 - 2s - loss: 0.2373 - acc: 0.8972
Epoch 644/1500
 - 2s - loss: 0.2383 - acc: 0.8972
Epoch 645/1500
 - 2s - loss: 0.2403 - acc: 0.8966
Epoch 646/1500
 - 2s - loss: 0.2386 - acc: 0.8969
Epoch 647/1500
 - 2s - loss: 0.2398 - acc: 0.8959
Epoch 648/1500
 - 3s - loss: 0.2388 - acc: 0.8967
Epoch 649/1500
 - 3s - loss: 0.2385 - acc: 0.8972
Epoch 650/1500
 - 3s - loss: 0.2395 - acc: 0.8966
Epoch 651/1500
 - 3s - loss: 0.2378 - acc: 0.8974
Epoch 652/1500
 - 3s - loss: 0.2385 - acc: 0.8967
Epoch 653/1500
 - 3s - loss: 0.2386 - acc: 0.8967
Epoch 654/1500
 - 2s - loss: 0.2385 - acc: 0.8968
Epoch 655/1500
 - 3s - loss: 0.2383 - acc: 0.8968
Epoch 656/1500
 - 2s - loss: 0.2383 - acc: 0.8968
Epoch 657/1500
 - 2s - loss: 0.2381 - acc: 0.8964
Epoch 658/1500
 - 2s - loss: 0.2383 - acc: 0.8969
Epoch 659/1500
 - 3s - loss: 0.2385 - acc: 0.8968
Epoch 660/1500
 - 2s - loss: 0.2378 - acc: 0.8967
Epoch 661/1500
 - 3s - loss: 0.2385 - acc: 0.8971
Epoch 662/1500
 - 2s - loss: 0.2380 - acc: 0.8973
Epoch 663/1500
 - 2s - loss: 0.2380 - acc: 0.8969
Epoch 664/1500
 - 3s - loss: 0.2389 - acc: 0.8966
Epoch 665/1500
 - 2s - loss: 0.2375 - acc: 0.8971
Epoch 666/1500
 - 3s - loss: 0.2378 - acc: 0.8974
Epoch 667/1500
 - 3s - loss: 0.2375 - acc: 0.8975
Epoch 668/1500
 - 2s - loss: 0.2377 - acc: 0.8972
Epoch 669/1500
 - 2s - loss: 0.2379 - acc: 0.8970
Epoch 670/1500
 - 2s - loss: 0.2372 - acc: 0.8972
Epoch 671/1500
 - 2s - loss: 0.2372 - acc: 0.8975
Epoch 672/1500
 - 2s - loss: 0.2374 - acc: 0.8971
Epoch 673/1500
 - 2s - loss: 0.2374 - acc: 0.8969
Epoch 674/1500
 - 3s - loss: 0.2376 - acc: 0.8974
Epoch 675/1500
 - 2s - loss: 0.2371 - acc: 0.8967
Epoch 676/1500
 - 3s - loss: 0.2363 - acc: 0.8976
Epoch 677/1500
 - 2s - loss: 0.2365 - acc: 0.8977
Epoch 678/1500
 - 2s - loss: 0.2369 - acc: 0.8976
Epoch 679/1500
 - 3s - loss: 0.2373 - acc: 0.8968
Epoch 680/1500
 - 3s - loss: 0.2374 - acc: 0.8972
Epoch 681/1500
 - 2s - loss: 0.2363 - acc: 0.8975
Epoch 682/1500
 - 2s - loss: 0.2368 - acc: 0.8974
Epoch 683/1500
 - 2s - loss: 0.2371 - acc: 0.8971
Epoch 684/1500
 - 2s - loss: 0.2373 - acc: 0.8971
Epoch 685/1500
 - 2s - loss: 0.2364 - acc: 0.8974
Epoch 686/1500
 - 3s - loss: 0.2368 - acc: 0.8976
Epoch 687/1500
 - 3s - loss: 0.2371 - acc: 0.8970
Epoch 688/1500
 - 2s - loss: 0.2365 - acc: 0.8972
Epoch 689/1500
 - 2s - loss: 0.2368 - acc: 0.8970
Epoch 690/1500
 - 2s - loss: 0.2364 - acc: 0.8976
Epoch 691/1500
 - 2s - loss: 0.2368 - acc: 0.8972
Epoch 692/1500
 - 2s - loss: 0.2365 - acc: 0.8974
Epoch 693/1500
 - 3s - loss: 0.2363 - acc: 0.8974
Epoch 694/1500
 - 2s - loss: 0.2367 - acc: 0.8972
Epoch 695/1500
 - 2s - loss: 0.2365 - acc: 0.8973
Epoch 696/1500
 - 2s - loss: 0.2368 - acc: 0.8976
Epoch 697/1500
 - 2s - loss: 0.2360 - acc: 0.8976
Epoch 698/1500
 - 3s - loss: 0.2357 - acc: 0.8977
Epoch 699/1500
 - 3s - loss: 0.2366 - acc: 0.8971
Epoch 700/1500
 - 2s - loss: 0.2372 - acc: 0.8974
Epoch 701/1500
 - 2s - loss: 0.2365 - acc: 0.8976
Epoch 702/1500
 - 2s - loss: 0.2364 - acc: 0.8973
Epoch 703/1500
 - 2s - loss: 0.2355 - acc: 0.8975
Epoch 704/1500
 - 3s - loss: 0.2369 - acc: 0.8971
Epoch 705/1500
 - 3s - loss: 0.2360 - acc: 0.8976
Epoch 706/1500
 - 3s - loss: 0.2368 - acc: 0.8975
Epoch 707/1500
 - 2s - loss: 0.2362 - acc: 0.8977
Epoch 708/1500
 - 2s - loss: 0.2362 - acc: 0.8975
Epoch 709/1500
 - 2s - loss: 0.2367 - acc: 0.8970
Epoch 710/1500
 - 2s - loss: 0.2365 - acc: 0.8973
Epoch 711/1500
 - 2s - loss: 0.2375 - acc: 0.8973
Epoch 712/1500
 - 3s - loss: 0.2364 - acc: 0.8973
Epoch 713/1500
 - 2s - loss: 0.2364 - acc: 0.8973
Epoch 714/1500
 - 2s - loss: 0.2359 - acc: 0.8978
Epoch 715/1500
 - 2s - loss: 0.2366 - acc: 0.8971
Epoch 716/1500
 - 2s - loss: 0.2365 - acc: 0.8973
Epoch 717/1500
 - 2s - loss: 0.2366 - acc: 0.8973
Epoch 718/1500
 - 2s - loss: 0.2374 - acc: 0.8968
Epoch 719/1500
 - 3s - loss: 0.2355 - acc: 0.8978
Epoch 720/1500
 - 2s - loss: 0.2364 - acc: 0.8974
Epoch 721/1500
 - 2s - loss: 0.2363 - acc: 0.8972
Epoch 722/1500
 - 3s - loss: 0.2360 - acc: 0.8979
Epoch 723/1500
 - 2s - loss: 0.2354 - acc: 0.8978
Epoch 724/1500
 - 3s - loss: 0.2368 - acc: 0.8973
Epoch 725/1500
 - 3s - loss: 0.2359 - acc: 0.8972
Epoch 726/1500
 - 3s - loss: 0.2356 - acc: 0.8977
Epoch 727/1500
 - 3s - loss: 0.2356 - acc: 0.8976
Epoch 728/1500
 - 3s - loss: 0.2357 - acc: 0.8975
Epoch 729/1500
 - 3s - loss: 0.2361 - acc: 0.8974
Epoch 730/1500
 - 2s - loss: 0.2359 - acc: 0.8975
Epoch 731/1500
 - 3s - loss: 0.2358 - acc: 0.8976
Epoch 732/1500
 - 2s - loss: 0.2353 - acc: 0.8980
Epoch 733/1500
 - 2s - loss: 0.2357 - acc: 0.8975
Epoch 734/1500
 - 3s - loss: 0.2360 - acc: 0.8978
Epoch 735/1500
 - 2s - loss: 0.2365 - acc: 0.8974
Epoch 736/1500
 - 2s - loss: 0.2364 - acc: 0.8970
Epoch 737/1500
 - 3s - loss: 0.2345 - acc: 0.8980
Epoch 738/1500
 - 2s - loss: 0.2354 - acc: 0.8978
Epoch 739/1500
 - 2s - loss: 0.2357 - acc: 0.8977
Epoch 740/1500
 - 2s - loss: 0.2354 - acc: 0.8976
Epoch 741/1500
 - 2s - loss: 0.2353 - acc: 0.8977
Epoch 742/1500
 - 2s - loss: 0.2360 - acc: 0.8975
Epoch 743/1500
 - 2s - loss: 0.2354 - acc: 0.8975
Epoch 744/1500
 - 3s - loss: 0.2356 - acc: 0.8976
Epoch 745/1500
 - 3s - loss: 0.2354 - acc: 0.8978
Epoch 746/1500
 - 3s - loss: 0.2357 - acc: 0.8976
Epoch 747/1500
 - 2s - loss: 0.2347 - acc: 0.8980
Epoch 748/1500
 - 2s - loss: 0.2354 - acc: 0.8979
Epoch 749/1500
 - 2s - loss: 0.2356 - acc: 0.8978
Epoch 750/1500
 - 3s - loss: 0.2351 - acc: 0.8979
Epoch 751/1500
 - 2s - loss: 0.2350 - acc: 0.8980
Epoch 752/1500
 - 2s - loss: 0.2353 - acc: 0.8978
Epoch 753/1500
 - 2s - loss: 0.2354 - acc: 0.8976
Epoch 754/1500
 - 2s - loss: 0.2349 - acc: 0.8979
Epoch 755/1500
 - 2s - loss: 0.2355 - acc: 0.8977
Epoch 756/1500
 - 3s - loss: 0.2360 - acc: 0.8976
Epoch 757/1500
 - 3s - loss: 0.2350 - acc: 0.8981
Epoch 758/1500
 - 2s - loss: 0.2366 - acc: 0.8975
Epoch 759/1500
 - 2s - loss: 0.2364 - acc: 0.8974
Epoch 760/1500
 - 2s - loss: 0.2359 - acc: 0.8977
Epoch 761/1500
 - 2s - loss: 0.2366 - acc: 0.8973
Epoch 762/1500
 - 2s - loss: 0.2354 - acc: 0.8981
Epoch 763/1500
 - 3s - loss: 0.2354 - acc: 0.8977
Epoch 764/1500
 - 2s - loss: 0.2354 - acc: 0.8974
Epoch 765/1500
 - 2s - loss: 0.2357 - acc: 0.8979
Epoch 766/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 767/1500
 - 2s - loss: 0.2359 - acc: 0.8974
Epoch 768/1500
 - 2s - loss: 0.2350 - acc: 0.8977
Epoch 769/1500
 - 3s - loss: 0.2350 - acc: 0.8980
Epoch 770/1500
 - 2s - loss: 0.2355 - acc: 0.8976
Epoch 771/1500
 - 2s - loss: 0.2347 - acc: 0.8975
Epoch 772/1500
 - 2s - loss: 0.2358 - acc: 0.8977
Epoch 773/1500
 - 2s - loss: 0.2348 - acc: 0.8978
Epoch 774/1500
 - 2s - loss: 0.2346 - acc: 0.8979
Epoch 775/1500
 - 2s - loss: 0.2354 - acc: 0.8979
Epoch 776/1500
 - 3s - loss: 0.2353 - acc: 0.8980
Epoch 777/1500
 - 2s - loss: 0.2362 - acc: 0.8981
Epoch 778/1500
 - 2s - loss: 0.2349 - acc: 0.8980
Epoch 779/1500
 - 2s - loss: 0.2358 - acc: 0.8978
Epoch 780/1500
 - 2s - loss: 0.2358 - acc: 0.8976
Epoch 781/1500
 - 2s - loss: 0.2361 - acc: 0.8972
Epoch 782/1500
 - 3s - loss: 0.2361 - acc: 0.8975
Epoch 783/1500
 - 2s - loss: 0.2359 - acc: 0.8975
Epoch 784/1500
 - 2s - loss: 0.2354 - acc: 0.8978
Epoch 785/1500
 - 2s - loss: 0.2352 - acc: 0.8979
Epoch 786/1500
 - 2s - loss: 0.2355 - acc: 0.8980
Epoch 787/1500
 - 2s - loss: 0.2356 - acc: 0.8979
Epoch 788/1500
 - 3s - loss: 0.2356 - acc: 0.8974
Epoch 789/1500
 - 3s - loss: 0.2348 - acc: 0.8982
Epoch 790/1500
 - 2s - loss: 0.2355 - acc: 0.8977
Epoch 791/1500
 - 2s - loss: 0.2353 - acc: 0.8976
Epoch 792/1500
 - 2s - loss: 0.2346 - acc: 0.8983
Epoch 793/1500
 - 2s - loss: 0.2348 - acc: 0.8979
Epoch 794/1500
 - 2s - loss: 0.2355 - acc: 0.8976
Epoch 795/1500
 - 3s - loss: 0.2355 - acc: 0.8980
Epoch 796/1500
 - 2s - loss: 0.2349 - acc: 0.8981
Epoch 797/1500
 - 2s - loss: 0.2356 - acc: 0.8977
Epoch 798/1500
 - 2s - loss: 0.2343 - acc: 0.8979
Epoch 799/1500
 - 3s - loss: 0.2350 - acc: 0.8980
Epoch 800/1500
 - 3s - loss: 0.2352 - acc: 0.8980
Epoch 801/1500
 - 3s - loss: 0.2359 - acc: 0.8977
Epoch 802/1500
 - 3s - loss: 0.2344 - acc: 0.8980
Epoch 803/1500
 - 3s - loss: 0.2348 - acc: 0.8983
Epoch 804/1500
 - 3s - loss: 0.2350 - acc: 0.8981
Epoch 805/1500
 - 2s - loss: 0.2343 - acc: 0.8982
Epoch 806/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 807/1500
 - 2s - loss: 0.2348 - acc: 0.8981
Epoch 808/1500
 - 3s - loss: 0.2353 - acc: 0.8979
Epoch 809/1500
 - 2s - loss: 0.2339 - acc: 0.8986
Epoch 810/1500
 - 2s - loss: 0.2342 - acc: 0.8984
Epoch 811/1500
 - 2s - loss: 0.2345 - acc: 0.8979
Epoch 812/1500
 - 2s - loss: 0.2356 - acc: 0.8980
Epoch 813/1500
 - 2s - loss: 0.2348 - acc: 0.8982
Epoch 814/1500
 - 3s - loss: 0.2352 - acc: 0.8982
Epoch 815/1500
 - 2s - loss: 0.2348 - acc: 0.8983
Epoch 816/1500
 - 3s - loss: 0.2345 - acc: 0.8981
Epoch 817/1500
 - 2s - loss: 0.2343 - acc: 0.8987
Epoch 818/1500
 - 2s - loss: 0.2346 - acc: 0.8979
Epoch 819/1500
 - 2s - loss: 0.2338 - acc: 0.8986
Epoch 820/1500
 - 3s - loss: 0.2341 - acc: 0.8982
Epoch 821/1500
 - 2s - loss: 0.2345 - acc: 0.8983
Epoch 822/1500
 - 2s - loss: 0.2354 - acc: 0.8977
Epoch 823/1500
 - 2s - loss: 0.2344 - acc: 0.8985
Epoch 824/1500
 - 2s - loss: 0.2340 - acc: 0.8983
Epoch 825/1500
 - 2s - loss: 0.2337 - acc: 0.8984
Epoch 826/1500
 - 2s - loss: 0.2340 - acc: 0.8983
Epoch 827/1500
 - 3s - loss: 0.2344 - acc: 0.8981
Epoch 828/1500
 - 2s - loss: 0.2340 - acc: 0.8982
Epoch 829/1500
 - 2s - loss: 0.2347 - acc: 0.8982
Epoch 830/1500
 - 3s - loss: 0.2335 - acc: 0.8984
Epoch 831/1500
 - 2s - loss: 0.2350 - acc: 0.8978
Epoch 832/1500
 - 2s - loss: 0.2344 - acc: 0.8981
Epoch 833/1500
 - 3s - loss: 0.2346 - acc: 0.8984
Epoch 834/1500
 - 2s - loss: 0.2342 - acc: 0.8984
Epoch 835/1500
 - 2s - loss: 0.2345 - acc: 0.8985
Epoch 836/1500
 - 2s - loss: 0.2351 - acc: 0.8980
Epoch 837/1500
 - 3s - loss: 0.2342 - acc: 0.8981
Epoch 838/1500
 - 2s - loss: 0.2345 - acc: 0.8981
Epoch 839/1500
 - 2s - loss: 0.2347 - acc: 0.8982
Epoch 840/1500
 - 3s - loss: 0.2352 - acc: 0.8979
Epoch 841/1500
 - 2s - loss: 0.2340 - acc: 0.8983
Epoch 842/1500
 - 3s - loss: 0.2352 - acc: 0.8981
Epoch 843/1500
 - 2s - loss: 0.2341 - acc: 0.8982
Epoch 844/1500
 - 2s - loss: 0.2343 - acc: 0.8983
Epoch 845/1500
 - 2s - loss: 0.2338 - acc: 0.8980
Epoch 846/1500
 - 3s - loss: 0.2350 - acc: 0.8981
Epoch 847/1500
 - 2s - loss: 0.2342 - acc: 0.8983
Epoch 848/1500
 - 2s - loss: 0.2338 - acc: 0.8986
Epoch 849/1500
 - 2s - loss: 0.2338 - acc: 0.8983
Epoch 850/1500
 - 2s - loss: 0.2335 - acc: 0.8987
Epoch 851/1500
 - 2s - loss: 0.2338 - acc: 0.8983
Epoch 852/1500
 - 2s - loss: 0.2342 - acc: 0.8982
Epoch 853/1500
 - 3s - loss: 0.2334 - acc: 0.8984
Epoch 854/1500
 - 2s - loss: 0.2339 - acc: 0.8985
Epoch 855/1500
 - 2s - loss: 0.2337 - acc: 0.8985
Epoch 856/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 857/1500
 - 2s - loss: 0.2348 - acc: 0.8983
Epoch 858/1500
 - 3s - loss: 0.2338 - acc: 0.8986
Epoch 859/1500
 - 3s - loss: 0.2336 - acc: 0.8983
Epoch 860/1500
 - 2s - loss: 0.2337 - acc: 0.8987
Epoch 861/1500
 - 2s - loss: 0.2355 - acc: 0.8981
Epoch 862/1500
 - 2s - loss: 0.2343 - acc: 0.8983
Epoch 863/1500
 - 2s - loss: 0.2339 - acc: 0.8983
Epoch 864/1500
 - 2s - loss: 0.2340 - acc: 0.8988
Epoch 865/1500
 - 3s - loss: 0.2341 - acc: 0.8982
Epoch 866/1500
 - 2s - loss: 0.2343 - acc: 0.8983
Epoch 867/1500
 - 2s - loss: 0.2339 - acc: 0.8984
Epoch 868/1500
 - 3s - loss: 0.2345 - acc: 0.8983
Epoch 869/1500
 - 2s - loss: 0.2339 - acc: 0.8983
Epoch 870/1500
 - 2s - loss: 0.2334 - acc: 0.8985
Epoch 871/1500
 - 3s - loss: 0.2333 - acc: 0.8984
Epoch 872/1500
 - 3s - loss: 0.2337 - acc: 0.8984
Epoch 873/1500
 - 2s - loss: 0.2336 - acc: 0.8987
Epoch 874/1500
 - 2s - loss: 0.2342 - acc: 0.8981
Epoch 875/1500
 - 2s - loss: 0.2339 - acc: 0.8981
Epoch 876/1500
 - 3s - loss: 0.2332 - acc: 0.8988
Epoch 877/1500
 - 3s - loss: 0.2342 - acc: 0.8982
Epoch 878/1500
 - 3s - loss: 0.2341 - acc: 0.8986
Epoch 879/1500
 - 3s - loss: 0.2342 - acc: 0.8986
Epoch 880/1500
 - 3s - loss: 0.2333 - acc: 0.8988
Epoch 881/1500
 - 2s - loss: 0.2332 - acc: 0.8987
Epoch 882/1500
 - 3s - loss: 0.2339 - acc: 0.8989
Epoch 883/1500
 - 2s - loss: 0.2342 - acc: 0.8983
Epoch 884/1500
 - 3s - loss: 0.2335 - acc: 0.8985
Epoch 885/1500
 - 2s - loss: 0.2337 - acc: 0.8985
Epoch 886/1500
 - 2s - loss: 0.2344 - acc: 0.8985
Epoch 887/1500
 - 2s - loss: 0.2344 - acc: 0.8982
Epoch 888/1500
 - 2s - loss: 0.2336 - acc: 0.8985
Epoch 889/1500
 - 2s - loss: 0.2334 - acc: 0.8986
Epoch 890/1500
 - 3s - loss: 0.2331 - acc: 0.8989
Epoch 891/1500
 - 3s - loss: 0.2340 - acc: 0.8984
Epoch 892/1500
 - 2s - loss: 0.2343 - acc: 0.8982
Epoch 893/1500
 - 2s - loss: 0.2336 - acc: 0.8984
Epoch 894/1500
 - 3s - loss: 0.2333 - acc: 0.8986
Epoch 895/1500
 - 2s - loss: 0.2335 - acc: 0.8984
Epoch 896/1500
 - 2s - loss: 0.2328 - acc: 0.8987
Epoch 897/1500
 - 3s - loss: 0.2340 - acc: 0.8985
Epoch 898/1500
 - 2s - loss: 0.2336 - acc: 0.8984
Epoch 899/1500
 - 2s - loss: 0.2341 - acc: 0.8983
Epoch 900/1500
 - 2s - loss: 0.2338 - acc: 0.8985
Epoch 901/1500
 - 2s - loss: 0.2330 - acc: 0.8989
Epoch 902/1500
 - 2s - loss: 0.2334 - acc: 0.8987
Epoch 903/1500
 - 3s - loss: 0.2334 - acc: 0.8987
Epoch 904/1500
 - 3s - loss: 0.2335 - acc: 0.8988
Epoch 905/1500
 - 2s - loss: 0.2342 - acc: 0.8986
Epoch 906/1500
 - 2s - loss: 0.2329 - acc: 0.8992
Epoch 907/1500
 - 2s - loss: 0.2346 - acc: 0.8981
Epoch 908/1500
 - 2s - loss: 0.2338 - acc: 0.8985
Epoch 909/1500
 - 3s - loss: 0.2332 - acc: 0.8985
Epoch 910/1500
 - 3s - loss: 0.2350 - acc: 0.8984
Epoch 911/1500
 - 2s - loss: 0.2332 - acc: 0.8985
Epoch 912/1500
 - 2s - loss: 0.2331 - acc: 0.8986
Epoch 913/1500
 - 2s - loss: 0.2328 - acc: 0.8987
Epoch 914/1500
 - 2s - loss: 0.2333 - acc: 0.8983
Epoch 915/1500
 - 2s - loss: 0.2329 - acc: 0.8991
Epoch 916/1500
 - 3s - loss: 0.2333 - acc: 0.8987
Epoch 917/1500
 - 2s - loss: 0.2333 - acc: 0.8987
Epoch 918/1500
 - 2s - loss: 0.2337 - acc: 0.8984
Epoch 919/1500
 - 2s - loss: 0.2324 - acc: 0.8993
Epoch 920/1500
 - 2s - loss: 0.2337 - acc: 0.8985
Epoch 921/1500
 - 2s - loss: 0.2334 - acc: 0.8985
Epoch 922/1500
 - 2s - loss: 0.2327 - acc: 0.8987
Epoch 923/1500
 - 3s - loss: 0.2330 - acc: 0.8984
Epoch 924/1500
 - 2s - loss: 0.2336 - acc: 0.8985
Epoch 925/1500
 - 2s - loss: 0.2334 - acc: 0.8989
Epoch 926/1500
 - 2s - loss: 0.2329 - acc: 0.8989
Epoch 927/1500
 - 2s - loss: 0.2327 - acc: 0.8990
Epoch 928/1500
 - 2s - loss: 0.2329 - acc: 0.8987
Epoch 929/1500
 - 3s - loss: 0.2335 - acc: 0.8984
Epoch 930/1500
 - 2s - loss: 0.2330 - acc: 0.8988
Epoch 931/1500
 - 3s - loss: 0.2330 - acc: 0.8986
Epoch 932/1500
 - 2s - loss: 0.2330 - acc: 0.8992
Epoch 933/1500
 - 3s - loss: 0.2338 - acc: 0.8987
Epoch 934/1500
 - 3s - loss: 0.2325 - acc: 0.8987
Epoch 935/1500
 - 3s - loss: 0.2330 - acc: 0.8989
Epoch 936/1500
 - 2s - loss: 0.2328 - acc: 0.8989
Epoch 937/1500
 - 2s - loss: 0.2329 - acc: 0.8987
Epoch 938/1500
 - 2s - loss: 0.2326 - acc: 0.8989
Epoch 939/1500
 - 2s - loss: 0.2334 - acc: 0.8985
Epoch 940/1500
 - 2s - loss: 0.2321 - acc: 0.8991
Epoch 941/1500
 - 2s - loss: 0.2329 - acc: 0.8984
Epoch 942/1500
 - 3s - loss: 0.2325 - acc: 0.8990
Epoch 943/1500
 - 2s - loss: 0.2333 - acc: 0.8986
Epoch 944/1500
 - 2s - loss: 0.2331 - acc: 0.8988
Epoch 945/1500
 - 2s - loss: 0.2327 - acc: 0.8988
Epoch 946/1500
 - 2s - loss: 0.2325 - acc: 0.8987
Epoch 947/1500
 - 2s - loss: 0.2327 - acc: 0.8989
Epoch 948/1500
 - 3s - loss: 0.2329 - acc: 0.8984
Epoch 949/1500
 - 2s - loss: 0.2320 - acc: 0.8991
Epoch 950/1500
 - 2s - loss: 0.2329 - acc: 0.8989
Epoch 951/1500
 - 2s - loss: 0.2327 - acc: 0.8992
Epoch 952/1500
 - 3s - loss: 0.2326 - acc: 0.8989
Epoch 953/1500
 - 3s - loss: 0.2323 - acc: 0.8990
Epoch 954/1500
 - 3s - loss: 0.2330 - acc: 0.8988
Epoch 955/1500
 - 3s - loss: 0.2326 - acc: 0.8988
Epoch 956/1500
 - 3s - loss: 0.2320 - acc: 0.8994
Epoch 957/1500
 - 2s - loss: 0.2333 - acc: 0.8988
Epoch 958/1500
 - 2s - loss: 0.2322 - acc: 0.8989
Epoch 959/1500
 - 2s - loss: 0.2322 - acc: 0.8989
Epoch 960/1500
 - 3s - loss: 0.2330 - acc: 0.8990
Epoch 961/1500
 - 3s - loss: 0.2321 - acc: 0.8991
Epoch 962/1500
 - 2s - loss: 0.2328 - acc: 0.8987
Epoch 963/1500
 - 2s - loss: 0.2324 - acc: 0.8992
Epoch 964/1500
 - 2s - loss: 0.2329 - acc: 0.8988
Epoch 965/1500
 - 2s - loss: 0.2331 - acc: 0.8986
Epoch 966/1500
 - 2s - loss: 0.2330 - acc: 0.8989
Epoch 967/1500
 - 3s - loss: 0.2324 - acc: 0.8990
Epoch 968/1500
 - 3s - loss: 0.2323 - acc: 0.8994
Epoch 969/1500
 - 3s - loss: 0.2320 - acc: 0.8990
Epoch 970/1500
 - 3s - loss: 0.2323 - acc: 0.8988
Epoch 971/1500
 - 2s - loss: 0.2322 - acc: 0.8994
Epoch 972/1500
 - 2s - loss: 0.2325 - acc: 0.8992
Epoch 973/1500
 - 3s - loss: 0.2315 - acc: 0.8994
Epoch 974/1500
 - 3s - loss: 0.2326 - acc: 0.8993
Epoch 975/1500
 - 3s - loss: 0.2320 - acc: 0.8991
Epoch 976/1500
 - 3s - loss: 0.2325 - acc: 0.8990
Epoch 977/1500
 - 3s - loss: 0.2332 - acc: 0.8988
Epoch 978/1500
 - 3s - loss: 0.2325 - acc: 0.8990
Epoch 979/1500
 - 3s - loss: 0.2322 - acc: 0.8995
Epoch 980/1500
 - 3s - loss: 0.2320 - acc: 0.8991
Epoch 981/1500
 - 3s - loss: 0.2327 - acc: 0.8989
Epoch 982/1500
 - 2s - loss: 0.2324 - acc: 0.8991
Epoch 983/1500
 - 2s - loss: 0.2325 - acc: 0.8990
Epoch 984/1500
 - 2s - loss: 0.2329 - acc: 0.8991
Epoch 985/1500
 - 3s - loss: 0.2332 - acc: 0.8986
Epoch 986/1500
 - 2s - loss: 0.2320 - acc: 0.8994
Epoch 987/1500
 - 2s - loss: 0.2315 - acc: 0.8993
Epoch 988/1500
 - 2s - loss: 0.2325 - acc: 0.8990
Epoch 989/1500
 - 2s - loss: 0.2322 - acc: 0.8991
Epoch 990/1500
 - 2s - loss: 0.2318 - acc: 0.8990
Epoch 991/1500
 - 2s - loss: 0.2318 - acc: 0.8994
Epoch 992/1500
 - 3s - loss: 0.2315 - acc: 0.8997
Epoch 993/1500
 - 2s - loss: 0.2319 - acc: 0.8994
Epoch 994/1500
 - 2s - loss: 0.2310 - acc: 0.8996
Epoch 995/1500
 - 2s - loss: 0.2315 - acc: 0.8994
Epoch 996/1500
 - 2s - loss: 0.2319 - acc: 0.8993
Epoch 997/1500
 - 2s - loss: 0.2323 - acc: 0.8989
Epoch 998/1500
 - 3s - loss: 0.2318 - acc: 0.8992
Epoch 999/1500
 - 2s - loss: 0.2313 - acc: 0.8992
Epoch 1000/1500
 - 2s - loss: 0.2321 - acc: 0.8988
Epoch 1001/1500
 - 2s - loss: 0.2328 - acc: 0.8990
Epoch 1002/1500
 - 2s - loss: 0.2323 - acc: 0.8992
Epoch 1003/1500
 - 2s - loss: 0.2319 - acc: 0.8992
Epoch 1004/1500
 - 2s - loss: 0.2322 - acc: 0.8991
Epoch 1005/1500
 - 3s - loss: 0.2319 - acc: 0.8991
Epoch 1006/1500
 - 2s - loss: 0.2320 - acc: 0.8991
Epoch 1007/1500
 - 2s - loss: 0.2320 - acc: 0.8991
Epoch 1008/1500
 - 2s - loss: 0.2318 - acc: 0.8991
Epoch 1009/1500
 - 2s - loss: 0.2314 - acc: 0.8994
Epoch 1010/1500
 - 2s - loss: 0.2313 - acc: 0.8993
Epoch 1011/1500
 - 3s - loss: 0.2316 - acc: 0.8992
Epoch 1012/1500
 - 2s - loss: 0.2317 - acc: 0.8992
Epoch 1013/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1014/1500
 - 2s - loss: 0.2316 - acc: 0.8992
Epoch 1015/1500
 - 2s - loss: 0.2316 - acc: 0.8989
Epoch 1016/1500
 - 2s - loss: 0.2318 - acc: 0.8991
Epoch 1017/1500
 - 2s - loss: 0.2326 - acc: 0.8989
Epoch 1018/1500
 - 3s - loss: 0.2314 - acc: 0.8993
Epoch 1019/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1020/1500
 - 2s - loss: 0.2321 - acc: 0.8993
Epoch 1021/1500
 - 2s - loss: 0.2322 - acc: 0.8993
Epoch 1022/1500
 - 2s - loss: 0.2326 - acc: 0.8995
Epoch 1023/1500
 - 2s - loss: 0.2314 - acc: 0.8991
Epoch 1024/1500
 - 3s - loss: 0.2319 - acc: 0.8992
Epoch 1025/1500
 - 2s - loss: 0.2309 - acc: 0.8997
Epoch 1026/1500
 - 2s - loss: 0.2305 - acc: 0.8998
Epoch 1027/1500
 - 3s - loss: 0.2331 - acc: 0.8990
Epoch 1028/1500
 - 3s - loss: 0.2316 - acc: 0.8995
Epoch 1029/1500
 - 3s - loss: 0.2315 - acc: 0.8995
Epoch 1030/1500
 - 3s - loss: 0.2318 - acc: 0.8991
Epoch 1031/1500
 - 3s - loss: 0.2315 - acc: 0.8990
Epoch 1032/1500
 - 2s - loss: 0.2307 - acc: 0.8997
Epoch 1033/1500
 - 2s - loss: 0.2318 - acc: 0.8995
Epoch 1034/1500
 - 2s - loss: 0.2311 - acc: 0.8993
Epoch 1035/1500
 - 2s - loss: 0.2322 - acc: 0.8991
Epoch 1036/1500
 - 3s - loss: 0.2309 - acc: 0.8998
Epoch 1037/1500
 - 3s - loss: 0.2316 - acc: 0.8996
Epoch 1038/1500
 - 2s - loss: 0.2317 - acc: 0.8995
Epoch 1039/1500
 - 2s - loss: 0.2317 - acc: 0.8999
Epoch 1040/1500
 - 3s - loss: 0.2317 - acc: 0.8993
Epoch 1041/1500
 - 2s - loss: 0.2313 - acc: 0.8996
Epoch 1042/1500
 - 2s - loss: 0.2309 - acc: 0.8998
Epoch 1043/1500
 - 3s - loss: 0.2316 - acc: 0.8996
Epoch 1044/1500
 - 2s - loss: 0.2325 - acc: 0.8988
Epoch 1045/1500
 - 2s - loss: 0.2305 - acc: 0.9000
Epoch 1046/1500
 - 2s - loss: 0.2319 - acc: 0.8996
Epoch 1047/1500
 - 2s - loss: 0.2324 - acc: 0.8993
Epoch 1048/1500
 - 3s - loss: 0.2317 - acc: 0.8995
Epoch 1049/1500
 - 3s - loss: 0.2314 - acc: 0.8994
Epoch 1050/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1051/1500
 - 2s - loss: 0.2312 - acc: 0.8995
Epoch 1052/1500
 - 3s - loss: 0.2310 - acc: 0.8995
Epoch 1053/1500
 - 3s - loss: 0.2315 - acc: 0.8994
Epoch 1054/1500
 - 2s - loss: 0.2322 - acc: 0.8990
Epoch 1055/1500
 - 2s - loss: 0.2306 - acc: 0.9000
Epoch 1056/1500
 - 3s - loss: 0.2313 - acc: 0.8995
Epoch 1057/1500
 - 2s - loss: 0.2305 - acc: 0.8993
Epoch 1058/1500
 - 2s - loss: 0.2310 - acc: 0.8995
Epoch 1059/1500
 - 2s - loss: 0.2326 - acc: 0.8992
Epoch 1060/1500
 - 2s - loss: 0.2318 - acc: 0.8995
Epoch 1061/1500
 - 2s - loss: 0.2306 - acc: 0.8994
Epoch 1062/1500
 - 3s - loss: 0.2314 - acc: 0.8993
Epoch 1063/1500
 - 2s - loss: 0.2308 - acc: 0.8998
Epoch 1064/1500
 - 2s - loss: 0.2316 - acc: 0.8995
Epoch 1065/1500
 - 3s - loss: 0.2306 - acc: 0.9000
Epoch 1066/1500
 - 3s - loss: 0.2303 - acc: 0.8998
Epoch 1067/1500
 - 2s - loss: 0.2310 - acc: 0.8994
Epoch 1068/1500
 - 3s - loss: 0.2306 - acc: 0.8998
Epoch 1069/1500
 - 3s - loss: 0.2306 - acc: 0.8997
Epoch 1070/1500
 - 2s - loss: 0.2313 - acc: 0.8991
Epoch 1071/1500
 - 2s - loss: 0.2303 - acc: 0.8999
Epoch 1072/1500
 - 2s - loss: 0.2305 - acc: 0.8995
Epoch 1073/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1074/1500
 - 3s - loss: 0.2320 - acc: 0.8994
Epoch 1075/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1076/1500
 - 2s - loss: 0.2316 - acc: 0.8992
Epoch 1077/1500
 - 2s - loss: 0.2311 - acc: 0.8996
Epoch 1078/1500
 - 3s - loss: 0.2309 - acc: 0.8996
Epoch 1079/1500
 - 3s - loss: 0.2309 - acc: 0.8997
Epoch 1080/1500
 - 3s - loss: 0.2303 - acc: 0.8998
Epoch 1081/1500
 - 3s - loss: 0.2307 - acc: 0.8999
Epoch 1082/1500
 - 2s - loss: 0.2305 - acc: 0.8995
Epoch 1083/1500
 - 3s - loss: 0.2318 - acc: 0.8995
Epoch 1084/1500
 - 3s - loss: 0.2302 - acc: 0.9000
Epoch 1085/1500
 - 2s - loss: 0.2303 - acc: 0.8999
Epoch 1086/1500
 - 3s - loss: 0.2312 - acc: 0.8998
Epoch 1087/1500
 - 3s - loss: 0.2310 - acc: 0.8995
Epoch 1088/1500
 - 2s - loss: 0.2300 - acc: 0.9001
Epoch 1089/1500
 - 2s - loss: 0.2316 - acc: 0.8993
Epoch 1090/1500
 - 2s - loss: 0.2314 - acc: 0.8995
Epoch 1091/1500
 - 2s - loss: 0.2308 - acc: 0.8998
Epoch 1092/1500
 - 2s - loss: 0.2311 - acc: 0.8997
Epoch 1093/1500
 - 3s - loss: 0.2303 - acc: 0.8999
Epoch 1094/1500
 - 3s - loss: 0.2305 - acc: 0.8997
Epoch 1095/1500
 - 2s - loss: 0.2308 - acc: 0.9001
Epoch 1096/1500
 - 2s - loss: 0.2303 - acc: 0.9002
Epoch 1097/1500
 - 2s - loss: 0.2306 - acc: 0.8999
Epoch 1098/1500
 - 2s - loss: 0.2298 - acc: 0.8999
Epoch 1099/1500
 - 2s - loss: 0.2306 - acc: 0.8998
Epoch 1100/1500
 - 3s - loss: 0.2299 - acc: 0.8999
Epoch 1101/1500
 - 2s - loss: 0.2305 - acc: 0.8995
Epoch 1102/1500
 - 3s - loss: 0.2311 - acc: 0.8997
Epoch 1103/1500
 - 3s - loss: 0.2301 - acc: 0.8998
Epoch 1104/1500
 - 3s - loss: 0.2311 - acc: 0.8993
Epoch 1105/1500
 - 3s - loss: 0.2308 - acc: 0.8997
Epoch 1106/1500
 - 3s - loss: 0.2305 - acc: 0.8994
Epoch 1107/1500
 - 3s - loss: 0.2305 - acc: 0.8997
Epoch 1108/1500
 - 3s - loss: 0.2297 - acc: 0.8997
Epoch 1109/1500
 - 2s - loss: 0.2311 - acc: 0.8992
Epoch 1110/1500
 - 2s - loss: 0.2303 - acc: 0.8996
Epoch 1111/1500
 - 3s - loss: 0.2307 - acc: 0.8995
Epoch 1112/1500
 - 3s - loss: 0.2301 - acc: 0.8996
Epoch 1113/1500
 - 3s - loss: 0.2304 - acc: 0.8995
Epoch 1114/1500
 - 2s - loss: 0.2312 - acc: 0.8999
Epoch 1115/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1116/1500
 - 2s - loss: 0.2313 - acc: 0.8994
Epoch 1117/1500
 - 2s - loss: 0.2300 - acc: 0.9002
Epoch 1118/1500
 - 2s - loss: 0.2304 - acc: 0.8996
Epoch 1119/1500
 - 3s - loss: 0.2316 - acc: 0.8995
Epoch 1120/1500
 - 2s - loss: 0.2307 - acc: 0.8996
Epoch 1121/1500
 - 2s - loss: 0.2302 - acc: 0.8999
Epoch 1122/1500
 - 2s - loss: 0.2300 - acc: 0.8997
Epoch 1123/1500
 - 2s - loss: 0.2307 - acc: 0.8994
Epoch 1124/1500
 - 2s - loss: 0.2302 - acc: 0.8995
Epoch 1125/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1126/1500
 - 2s - loss: 0.2306 - acc: 0.8998
Epoch 1127/1500
 - 2s - loss: 0.2309 - acc: 0.8996
Epoch 1128/1500
 - 2s - loss: 0.2308 - acc: 0.8996
Epoch 1129/1500
 - 3s - loss: 0.2299 - acc: 0.8995
Epoch 1130/1500
 - 3s - loss: 0.2310 - acc: 0.8996
Epoch 1131/1500
 - 2s - loss: 0.2304 - acc: 0.8999
Epoch 1132/1500
 - 3s - loss: 0.2301 - acc: 0.9000
Epoch 1133/1500
 - 2s - loss: 0.2298 - acc: 0.8999
Epoch 1134/1500
 - 2s - loss: 0.2304 - acc: 0.8998
Epoch 1135/1500
 - 2s - loss: 0.2297 - acc: 0.9000
Epoch 1136/1500
 - 2s - loss: 0.2303 - acc: 0.8998
Epoch 1137/1500
 - 2s - loss: 0.2312 - acc: 0.8995
Epoch 1138/1500
 - 3s - loss: 0.2307 - acc: 0.8997
Epoch 1139/1500
 - 2s - loss: 0.2309 - acc: 0.8994
Epoch 1140/1500
 - 2s - loss: 0.2311 - acc: 0.8994
Epoch 1141/1500
 - 2s - loss: 0.2309 - acc: 0.8998
Epoch 1142/1500
 - 2s - loss: 0.2314 - acc: 0.8993
Epoch 1143/1500
 - 2s - loss: 0.2303 - acc: 0.8994
Epoch 1144/1500
 - 2s - loss: 0.2306 - acc: 0.8994
Epoch 1145/1500
 - 3s - loss: 0.2301 - acc: 0.8998
Epoch 1146/1500
 - 2s - loss: 0.2308 - acc: 0.8994
Epoch 1147/1500
 - 2s - loss: 0.2306 - acc: 0.8996
Epoch 1148/1500
 - 2s - loss: 0.2314 - acc: 0.8991
Epoch 1149/1500
 - 2s - loss: 0.2303 - acc: 0.8994
Epoch 1150/1500
 - 2s - loss: 0.2304 - acc: 0.8992
Epoch 1151/1500
 - 3s - loss: 0.2305 - acc: 0.8996
Epoch 1152/1500
 - 2s - loss: 0.2307 - acc: 0.8998
Epoch 1153/1500
 - 2s - loss: 0.2307 - acc: 0.8994
Epoch 1154/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1155/1500
 - 2s - loss: 0.2301 - acc: 0.9000
Epoch 1156/1500
 - 2s - loss: 0.2298 - acc: 0.8995
Epoch 1157/1500
 - 3s - loss: 0.2307 - acc: 0.8996
Epoch 1158/1500
 - 3s - loss: 0.2308 - acc: 0.8992
Epoch 1159/1500
 - 2s - loss: 0.2294 - acc: 0.9000
Epoch 1160/1500
 - 2s - loss: 0.2299 - acc: 0.8998
Epoch 1161/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1162/1500
 - 2s - loss: 0.2298 - acc: 0.8996
Epoch 1163/1500
 - 2s - loss: 0.2308 - acc: 0.8997
Epoch 1164/1500
 - 3s - loss: 0.2297 - acc: 0.8999
Epoch 1165/1500
 - 3s - loss: 0.2298 - acc: 0.9001
Epoch 1166/1500
 - 3s - loss: 0.2302 - acc: 0.8993
Epoch 1167/1500
 - 2s - loss: 0.2294 - acc: 0.9001
Epoch 1168/1500
 - 2s - loss: 0.2296 - acc: 0.9003
Epoch 1169/1500
 - 2s - loss: 0.2299 - acc: 0.8997
Epoch 1170/1500
 - 3s - loss: 0.2304 - acc: 0.8999
Epoch 1171/1500
 - 2s - loss: 0.2304 - acc: 0.8996
Epoch 1172/1500
 - 2s - loss: 0.2298 - acc: 0.9001
Epoch 1173/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1174/1500
 - 2s - loss: 0.2301 - acc: 0.9001
Epoch 1175/1500
 - 3s - loss: 0.2303 - acc: 0.8996
Epoch 1176/1500
 - 3s - loss: 0.2292 - acc: 0.8997
Epoch 1177/1500
 - 3s - loss: 0.2299 - acc: 0.8996
Epoch 1178/1500
 - 3s - loss: 0.2297 - acc: 0.8999
Epoch 1179/1500
 - 3s - loss: 0.2302 - acc: 0.8999
Epoch 1180/1500
 - 3s - loss: 0.2309 - acc: 0.8992
Epoch 1181/1500
 - 3s - loss: 0.2302 - acc: 0.8999
Epoch 1182/1500
 - 3s - loss: 0.2301 - acc: 0.8996
Epoch 1183/1500
 - 3s - loss: 0.2297 - acc: 0.8999
Epoch 1184/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1185/1500
 - 2s - loss: 0.2301 - acc: 0.8998
Epoch 1186/1500
 - 3s - loss: 0.2303 - acc: 0.8994
Epoch 1187/1500
 - 3s - loss: 0.2302 - acc: 0.8998
Epoch 1188/1500
 - 3s - loss: 0.2302 - acc: 0.8995
Epoch 1189/1500
 - 3s - loss: 0.2308 - acc: 0.8994
Epoch 1190/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1191/1500
 - 2s - loss: 0.2297 - acc: 0.8998
Epoch 1192/1500
 - 3s - loss: 0.2294 - acc: 0.9002
Epoch 1193/1500
 - 2s - loss: 0.2295 - acc: 0.9001
Epoch 1194/1500
 - 2s - loss: 0.2296 - acc: 0.8998
Epoch 1195/1500
 - 3s - loss: 0.2309 - acc: 0.8997
Epoch 1196/1500
 - 3s - loss: 0.2306 - acc: 0.8997
Epoch 1197/1500
 - 2s - loss: 0.2302 - acc: 0.8997
Epoch 1198/1500
 - 2s - loss: 0.2304 - acc: 0.8995
Epoch 1199/1500
 - 3s - loss: 0.2296 - acc: 0.8996
Epoch 1200/1500
 - 2s - loss: 0.2307 - acc: 0.8995
Epoch 1201/1500
 - 2s - loss: 0.2293 - acc: 0.9001
Epoch 1202/1500
 - 3s - loss: 0.2304 - acc: 0.8997
Epoch 1203/1500
 - 2s - loss: 0.2303 - acc: 0.8995
Epoch 1204/1500
 - 2s - loss: 0.2301 - acc: 0.8998
Epoch 1205/1500
 - 2s - loss: 0.2293 - acc: 0.8999
Epoch 1206/1500
 - 2s - loss: 0.2302 - acc: 0.8994
Epoch 1207/1500
 - 3s - loss: 0.2293 - acc: 0.9004
Epoch 1208/1500
 - 3s - loss: 0.2300 - acc: 0.8998
Epoch 1209/1500
 - 2s - loss: 0.2300 - acc: 0.8997
Epoch 1210/1500
 - 2s - loss: 0.2305 - acc: 0.8997
Epoch 1211/1500
 - 2s - loss: 0.2297 - acc: 0.8998
Epoch 1212/1500
 - 2s - loss: 0.2305 - acc: 0.8997
Epoch 1213/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1214/1500
 - 3s - loss: 0.2297 - acc: 0.8998
Epoch 1215/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1216/1500
 - 2s - loss: 0.2293 - acc: 0.8998
Epoch 1217/1500
 - 2s - loss: 0.2298 - acc: 0.8995
Epoch 1218/1500
 - 2s - loss: 0.2289 - acc: 0.9003
Epoch 1219/1500
 - 2s - loss: 0.2294 - acc: 0.9001
Epoch 1220/1500
 - 2s - loss: 0.2302 - acc: 0.8994
Epoch 1221/1500
 - 3s - loss: 0.2299 - acc: 0.8998
Epoch 1222/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1223/1500
 - 2s - loss: 0.2295 - acc: 0.9001
Epoch 1224/1500
 - 2s - loss: 0.2297 - acc: 0.8996
Epoch 1225/1500
 - 2s - loss: 0.2297 - acc: 0.8996
Epoch 1226/1500
 - 2s - loss: 0.2290 - acc: 0.9002
Epoch 1227/1500
 - 3s - loss: 0.2297 - acc: 0.8998
Epoch 1228/1500
 - 2s - loss: 0.2298 - acc: 0.9000
Epoch 1229/1500
 - 2s - loss: 0.2293 - acc: 0.9001
Epoch 1230/1500
 - 2s - loss: 0.2306 - acc: 0.8995
Epoch 1231/1500
 - 2s - loss: 0.2291 - acc: 0.9003
Epoch 1232/1500
 - 3s - loss: 0.2300 - acc: 0.8996
Epoch 1233/1500
 - 3s - loss: 0.2300 - acc: 0.8998
Epoch 1234/1500
 - 3s - loss: 0.2293 - acc: 0.8997
Epoch 1235/1500
 - 2s - loss: 0.2298 - acc: 0.8999
Epoch 1236/1500
 - 2s - loss: 0.2300 - acc: 0.8994
Epoch 1237/1500
 - 3s - loss: 0.2294 - acc: 0.8999
Epoch 1238/1500
 - 2s - loss: 0.2304 - acc: 0.8997
Epoch 1239/1500
 - 2s - loss: 0.2302 - acc: 0.8994
Epoch 1240/1500
 - 3s - loss: 0.2296 - acc: 0.8994
Epoch 1241/1500
 - 2s - loss: 0.2293 - acc: 0.9000
Epoch 1242/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1243/1500
 - 2s - loss: 0.2297 - acc: 0.8995
Epoch 1244/1500
 - 2s - loss: 0.2293 - acc: 0.8996
Epoch 1245/1500
 - 2s - loss: 0.2296 - acc: 0.9000
Epoch 1246/1500
 - 3s - loss: 0.2303 - acc: 0.8995
Epoch 1247/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1248/1500
 - 2s - loss: 0.2296 - acc: 0.8999
Epoch 1249/1500
 - 2s - loss: 0.2292 - acc: 0.8998
Epoch 1250/1500
 - 2s - loss: 0.2302 - acc: 0.8996
Epoch 1251/1500
 - 2s - loss: 0.2299 - acc: 0.8995
Epoch 1252/1500
 - 2s - loss: 0.2297 - acc: 0.9000
Epoch 1253/1500
 - 3s - loss: 0.2294 - acc: 0.9001
Epoch 1254/1500
 - 3s - loss: 0.2299 - acc: 0.9001
Epoch 1255/1500
 - 3s - loss: 0.2306 - acc: 0.8994
Epoch 1256/1500
 - 3s - loss: 0.2305 - acc: 0.8993
Epoch 1257/1500
 - 3s - loss: 0.2294 - acc: 0.8999
Epoch 1258/1500
 - 2s - loss: 0.2302 - acc: 0.8997
Epoch 1259/1500
 - 3s - loss: 0.2301 - acc: 0.8998
Epoch 1260/1500
 - 2s - loss: 0.2297 - acc: 0.8999
Epoch 1261/1500
 - 2s - loss: 0.2300 - acc: 0.9000
Epoch 1262/1500
 - 2s - loss: 0.2300 - acc: 0.8997
Epoch 1263/1500
 - 2s - loss: 0.2306 - acc: 0.8998
Epoch 1264/1500
 - 2s - loss: 0.2307 - acc: 0.8995
Epoch 1265/1500
 - 3s - loss: 0.2293 - acc: 0.9001
Epoch 1266/1500
 - 2s - loss: 0.2303 - acc: 0.8996
Epoch 1267/1500
 - 3s - loss: 0.2305 - acc: 0.8998
Epoch 1268/1500
 - 2s - loss: 0.2291 - acc: 0.9001
Epoch 1269/1500
 - 2s - loss: 0.2299 - acc: 0.8996
Epoch 1270/1500
 - 2s - loss: 0.2296 - acc: 0.8995
Epoch 1271/1500
 - 2s - loss: 0.2296 - acc: 0.9001
Epoch 1272/1500
 - 3s - loss: 0.2300 - acc: 0.9000
Epoch 1273/1500
 - 2s - loss: 0.2295 - acc: 0.8998
Epoch 1274/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1275/1500
 - 2s - loss: 0.2291 - acc: 0.9002
Epoch 1276/1500
 - 2s - loss: 0.2296 - acc: 0.9002
Epoch 1277/1500
 - 2s - loss: 0.2295 - acc: 0.8997
Epoch 1278/1500
 - 3s - loss: 0.2295 - acc: 0.8996
Epoch 1279/1500
 - 2s - loss: 0.2289 - acc: 0.9002
Epoch 1280/1500
 - 3s - loss: 0.2295 - acc: 0.8998
Epoch 1281/1500
 - 3s - loss: 0.2304 - acc: 0.8998
Epoch 1282/1500
 - 2s - loss: 0.2295 - acc: 0.9003
Epoch 1283/1500
 - 2s - loss: 0.2299 - acc: 0.8999
Epoch 1284/1500
 - 3s - loss: 0.2296 - acc: 0.9000
Epoch 1285/1500
 - 2s - loss: 0.2291 - acc: 0.9000
Epoch 1286/1500
 - 2s - loss: 0.2306 - acc: 0.8993
Epoch 1287/1500
 - 3s - loss: 0.2294 - acc: 0.8999
Epoch 1288/1500
 - 2s - loss: 0.2291 - acc: 0.8999
Epoch 1289/1500
 - 2s - loss: 0.2294 - acc: 0.9001
Epoch 1290/1500
 - 2s - loss: 0.2300 - acc: 0.8998
Epoch 1291/1500
 - 3s - loss: 0.2299 - acc: 0.8998
Epoch 1292/1500
 - 2s - loss: 0.2286 - acc: 0.9003
Epoch 1293/1500
 - 2s - loss: 0.2298 - acc: 0.8998
Epoch 1294/1500
 - 2s - loss: 0.2292 - acc: 0.9002
Epoch 1295/1500
 - 2s - loss: 0.2301 - acc: 0.8999
Epoch 1296/1500
 - 2s - loss: 0.2299 - acc: 0.8997
Epoch 1297/1500
 - 3s - loss: 0.2288 - acc: 0.9001
Epoch 1298/1500
 - 2s - loss: 0.2288 - acc: 0.9000
Epoch 1299/1500
 - 2s - loss: 0.2296 - acc: 0.8996
Epoch 1300/1500
 - 2s - loss: 0.2297 - acc: 0.8997
Epoch 1301/1500
 - 2s - loss: 0.2301 - acc: 0.8998
Epoch 1302/1500
 - 3s - loss: 0.2295 - acc: 0.8997
Epoch 1303/1500
 - 2s - loss: 0.2290 - acc: 0.9001
Epoch 1304/1500
 - 3s - loss: 0.2298 - acc: 0.8999
Epoch 1305/1500
 - 2s - loss: 0.2285 - acc: 0.9004
Epoch 1306/1500
 - 2s - loss: 0.2300 - acc: 0.8996
Epoch 1307/1500
 - 2s - loss: 0.2295 - acc: 0.8997
Epoch 1308/1500
 - 2s - loss: 0.2294 - acc: 0.9000
Epoch 1309/1500
 - 2s - loss: 0.2293 - acc: 0.8999
Epoch 1310/1500
 - 3s - loss: 0.2290 - acc: 0.9001
Epoch 1311/1500
 - 2s - loss: 0.2296 - acc: 0.9001
Epoch 1312/1500
 - 2s - loss: 0.2287 - acc: 0.8999
Epoch 1313/1500
 - 3s - loss: 0.2293 - acc: 0.9002
Epoch 1314/1500
 - 3s - loss: 0.2295 - acc: 0.8997
Epoch 1315/1500
 - 2s - loss: 0.2292 - acc: 0.8999
Epoch 1316/1500
 - 3s - loss: 0.2302 - acc: 0.8998
Epoch 1317/1500
 - 3s - loss: 0.2288 - acc: 0.9003
Epoch 1318/1500
 - 2s - loss: 0.2291 - acc: 0.8999
Epoch 1319/1500
 - 2s - loss: 0.2287 - acc: 0.9000
Epoch 1320/1500
 - 2s - loss: 0.2290 - acc: 0.9000
Epoch 1321/1500
 - 2s - loss: 0.2292 - acc: 0.9003
Epoch 1322/1500
 - 3s - loss: 0.2293 - acc: 0.8999
Epoch 1323/1500
 - 3s - loss: 0.2288 - acc: 0.9001
Epoch 1324/1500
 - 2s - loss: 0.2291 - acc: 0.9003
Epoch 1325/1500
 - 2s - loss: 0.2288 - acc: 0.9002
Epoch 1326/1500
 - 2s - loss: 0.2299 - acc: 0.9001
Epoch 1327/1500
 - 2s - loss: 0.2290 - acc: 0.9002
Epoch 1328/1500
 - 2s - loss: 0.2296 - acc: 0.9003
Epoch 1329/1500
 - 3s - loss: 0.2287 - acc: 0.9001
Epoch 1330/1500
 - 3s - loss: 0.2297 - acc: 0.8998
Epoch 1331/1500
 - 3s - loss: 0.2298 - acc: 0.8999
Epoch 1332/1500
 - 3s - loss: 0.2294 - acc: 0.9001
Epoch 1333/1500
 - 3s - loss: 0.2297 - acc: 0.9002
Epoch 1334/1500
 - 2s - loss: 0.2311 - acc: 0.8993
Epoch 1335/1500
 - 3s - loss: 0.2284 - acc: 0.9003
Epoch 1336/1500
 - 2s - loss: 0.2296 - acc: 0.8997
Epoch 1337/1500
 - 2s - loss: 0.2297 - acc: 0.9000
Epoch 1338/1500
 - 2s - loss: 0.2293 - acc: 0.9001
Epoch 1339/1500
 - 2s - loss: 0.2291 - acc: 0.9002
Epoch 1340/1500
 - 2s - loss: 0.2290 - acc: 0.8999
Epoch 1341/1500
 - 2s - loss: 0.2287 - acc: 0.9003
Epoch 1342/1500
 - 3s - loss: 0.2291 - acc: 0.8998
Epoch 1343/1500
 - 2s - loss: 0.2289 - acc: 0.8998
Epoch 1344/1500
 - 2s - loss: 0.2288 - acc: 0.9002
Epoch 1345/1500
 - 2s - loss: 0.2292 - acc: 0.9000
Epoch 1346/1500
 - 2s - loss: 0.2293 - acc: 0.9000
Epoch 1347/1500
 - 2s - loss: 0.2288 - acc: 0.9001
Epoch 1348/1500
 - 3s - loss: 0.2289 - acc: 0.9003
Epoch 1349/1500
 - 2s - loss: 0.2287 - acc: 0.9005
Epoch 1350/1500
 - 2s - loss: 0.2294 - acc: 0.9000
Epoch 1351/1500
 - 3s - loss: 0.2288 - acc: 0.8999
Epoch 1352/1500
 - 2s - loss: 0.2287 - acc: 0.9004
Epoch 1353/1500
 - 3s - loss: 0.2291 - acc: 0.9001
Epoch 1354/1500
 - 2s - loss: 0.2289 - acc: 0.9001
Epoch 1355/1500
 - 3s - loss: 0.2297 - acc: 0.8997
Epoch 1356/1500
 - 2s - loss: 0.2289 - acc: 0.9001
Epoch 1357/1500
 - 2s - loss: 0.2292 - acc: 0.8995
Epoch 1358/1500
 - 2s - loss: 0.2293 - acc: 0.8999
Epoch 1359/1500
 - 2s - loss: 0.2295 - acc: 0.9001
Epoch 1360/1500
 - 2s - loss: 0.2292 - acc: 0.9001
Epoch 1361/1500
 - 3s - loss: 0.2283 - acc: 0.9002
Epoch 1362/1500
 - 3s - loss: 0.2289 - acc: 0.9001
Epoch 1363/1500
 - 3s - loss: 0.2293 - acc: 0.8996
Epoch 1364/1500
 - 3s - loss: 0.2288 - acc: 0.9002
Epoch 1365/1500
 - 3s - loss: 0.2286 - acc: 0.9001
Epoch 1366/1500
 - 2s - loss: 0.2285 - acc: 0.9001
Epoch 1367/1500
 - 3s - loss: 0.2291 - acc: 0.9001
Epoch 1368/1500
 - 2s - loss: 0.2291 - acc: 0.9001
Epoch 1369/1500
 - 2s - loss: 0.2294 - acc: 0.9002
Epoch 1370/1500
 - 2s - loss: 0.2287 - acc: 0.9001
Epoch 1371/1500
 - 2s - loss: 0.2299 - acc: 0.8995
Epoch 1372/1500
 - 2s - loss: 0.2303 - acc: 0.8998
... (epochs 1373–1499 omitted for brevity; loss plateaus near 0.229 and training accuracy near 0.900 throughout) ...
Epoch 1500/1500
 - 2s - loss: 0.2285 - acc: 0.9001
116811/116811 [==============================] - 1s 11us/step

acc: 89.94%
Train Predictions:
467243/467243 [==============================] - 5s 10us/step
Score - 
acc: 89.99%
Test Predictions:
116811/116811 [==============================] - 1s 10us/step
Score - 
acc: 89.94%
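
The log above shows the loss hovering around 0.229 and accuracy around 0.900 for well over a hundred epochs, i.e. training has effectively plateaued long before epoch 1500. As an illustrative (not part of the original notebook) helper, the verbose Keras epoch lines can be parsed to detect such a plateau; the function names and thresholds below are assumptions chosen for the sketch:

```python
import re

def parse_epoch_logs(text):
    """Extract (loss, acc) pairs from Keras verbose=2 log lines
    of the form ' - 2s - loss: 0.2299 - acc: 0.8995'."""
    return [(float(m.group(1)), float(m.group(2)))
            for m in re.finditer(r"loss: ([0-9.]+) - acc: ([0-9.]+)", text)]

def has_plateaued(losses, window=5, min_delta=1e-3):
    """True if the best loss in the last `window` epochs improves on the
    best loss seen before that window by less than `min_delta`."""
    if len(losses) <= window:
        return False
    return min(losses[:-window]) - min(losses[-window:]) < min_delta

# Sample lines copied from the training log above.
log = """ - 2s - loss: 0.2299 - acc: 0.8995
 - 2s - loss: 0.2303 - acc: 0.8998
 - 2s - loss: 0.2287 - acc: 0.9002"""
pairs = parse_epoch_logs(log)
print(pairs[0])  # (0.2299, 0.8995)
```

In practice, the same effect is achieved more directly by passing an `EarlyStopping` callback to `model.fit`, which would have stopped this run far short of 1500 epochs once the loss stopped improving.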

Miscellaneous